The use of massive image databases has increased drastically over the past few years due to the evolution of multimedia technology. Image retrieval has become one of the vital tools in image processing applications, and Content-Based Image Retrieval (CBIR) has been widely used in varied applications. However, the results produced by using a single image feature are not satisfactory, so multiple image features are often combined to attain better results. Even then, fast and effective searching for relevant images in a database remains a challenging task. A previous CBIR system used a combined feature extraction technique based on the color auto-correlogram, Rotation-Invariant Uniform Local Binary Patterns (RULBP) and local energy. However, that system does not provide significant results in terms of recall and precision, and its computational complexity is high. To handle these issues, the Gray Level Co-occurrence Matrix (GLCM) with a Deep Learning based Enhanced Convolutional Neural Network (DLECNN) is proposed in this work. The proposed framework includes noise reduction using histogram equalization, feature extraction using the GLCM, similarity matching using the Hierarchical and Fuzzy c-Means (HFCM) algorithm, and image retrieval using the DLECNN algorithm. Histogram equalization is used for image enhancement, so the enhanced image has a uniform histogram. The GLCM method is then used to extract features such as shape, texture, colour, annotations and keywords. The HFCM similarity measure computes the similarity index between the query image vector and every database image. To further enhance the performance of this retrieval approach, the DLECNN algorithm is proposed to retrieve more accurate image features. The proposed GLCM+DLECNN algorithm provides better results, with higher accuracy, precision, recall and F-measure and lower complexity. The experimental results clearly show that the proposed system provides efficient image retrieval for a given query image.
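As a rough illustration of the front end of such a pipeline (not the authors' implementation), the sketch below applies histogram equalization and then computes GLCM texture statistics with OpenCV and scikit-image; the GLCM parameters and property set are assumptions chosen for demonstration only. The resulting vector would then feed the similarity and retrieval stages described above.

```python
import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(image_path):
    """Illustrative sketch: histogram equalization followed by GLCM texture features."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    enhanced = cv2.equalizeHist(gray)                  # enhancement step: uniform histogram

    # Co-occurrence matrix at distance 1 over four orientations
    glcm = graycomatrix(enhanced, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)

    # Common GLCM statistics used as a texture descriptor
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])
```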
The sporadic communication character of massive machine-type communication systems provides natural advantages for utilizing the principle of compressive sensing (CS). However, due to the high computational complexity of CS algorithms, CS-based contention-free access schemes have limited scalability and high computational complexity for massive access with user-specific pilots. To address these problems, this paper proposes a new contention-based scheme for CS-based massive access, which can support the sporadic access of a massive number of devices (more than one million) with limited resources. Furthermore, an advanced receiver algorithm is designed to solve for the optimal solutions of the proposed scheme, exploiting various kinds of prior information to enhance performance. Specifically, the joint sparsity between the channel and the data is used to improve the accuracy of pilot detection, and the modulation and cyclic redundancy check information is exploited for channel correction to improve the performance of data recovery. Simulation results show that the proposed scheme achieves better active user detection performance and data recovery accuracy than existing methods.
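As a toy illustration of why sparsity helps in sporadic access (this is not the paper's contention-based scheme or receiver), the sketch below recovers a sparse device-activity vector from far fewer observations than devices using orthogonal matching pursuit from scikit-learn; the dimensions and the pilot matrix are hypothetical.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
N, M, K = 1000, 100, 5                          # potential devices, observations, active devices
A = rng.standard_normal((M, N)) / np.sqrt(M)    # pilot (sensing) matrix, one column per device
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)  # sparse activity/channel vector
y = A @ x + 0.01 * rng.standard_normal(M)       # noisy received signal

# Recover the K-sparse vector and report which devices look active
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=K).fit(A, y)
detected = np.flatnonzero(omp.coef_)
```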
A novel content-based image retrieval (CBIR) algorithm using relevance feedback is presented. The proposed framework has three major contributions: a novel feature descriptor called the color spectral histogram (CSH) to measure the similarity between images; a two-dimensional matrix based indexing approach proposed for short-term learning (STL); and long-term learning (LTL). In general, image similarities are measured from feature representations that include color quantization, texture, color, shape and edges; CSH, however, can describe the image feature with the histogram alone. Typically the image retrieval process starts by computing the similarity between the query image and the images in the database; the major computation involved is that selecting the top-ranking images requires a sorting algorithm with a lower bound of at least O(n log n). A 2D matrix based indexing of images can enormously reduce the search time in STL. The same structure is used for LTL, with the aim of reducing the amount of log information to be maintained. The performance of the proposed framework is analyzed and compared with existing approaches; the quantified results indicate that the proposed feature descriptor is more effective than existing feature descriptors originally developed for CBIR. In terms of STL, the proposed 2D matrix based indexing minimizes the computational effort for retrieving similar images, and for LTL, the proposed algorithm requires less log information than existing approaches.
Hepatic computed tomography (CT) images were analyzed with the Gabor function. A threshold-based classification scheme using Gabor features was then proposed, and retrieval of the hepatic CT images proceeded on this basis. In our experiments, a batch of hepatic CT images containing several types of CT findings was used, and the method was compared with Zhao's image classification scheme, a support vector machine (SVM) scheme and the threshold-based scheme.
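A minimal sketch of Gabor texture features for a grayscale CT slice, assuming scikit-image is available; the frequency and orientation grid is illustrative and not taken from the paper. Such a feature vector could then feed a simple threshold-based classifier of the kind described.

```python
import numpy as np
from skimage.filters import gabor

def gabor_features(ct_slice, frequencies=(0.1, 0.2, 0.4), orientations=4):
    """Mean and standard deviation of Gabor response magnitudes at several scales/orientations."""
    feats = []
    for f in frequencies:
        for k in range(orientations):
            real, imag = gabor(ct_slice, frequency=f, theta=k * np.pi / orientations)
            magnitude = np.hypot(real, imag)
            feats += [magnitude.mean(), magnitude.std()]
    return np.array(feats)
```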
A hierarchical retrieval scheme for an accessory image database is proposed based on the contour and region features of textile industrial accessories. First, the smallest enclosing rectangle feature [1] (degree of accessory coordination) is used to filter the image database and narrow the search scope. After the accessory contour and region information are extracted, a fused multi-feature of the centroid distance Fourier descriptor and the distance distribution histogram is adopted to complete the image retrieval accurately. All of the above features are invariant under translation, scaling and rotation. Results from tests on an image database of 1,000 accessory images demonstrate that the method is effective and practical, with high accuracy and fast speed.
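The centroid distance Fourier descriptor mentioned above can be sketched as follows; this is an illustrative implementation assuming the accessory boundary is already available as an (N, 2) array of contour points, and the number of retained coefficients is arbitrary.

```python
import numpy as np

def centroid_distance_fd(contour, n_coeffs=16):
    """Fourier descriptor of the centroid-distance signature of a closed contour."""
    centroid = contour.mean(axis=0)
    r = np.linalg.norm(contour - centroid, axis=1)   # centroid distance signature
    spectrum = np.abs(np.fft.fft(r))
    # Dropping the DC term and normalizing by it makes the descriptor invariant to
    # translation and scale; using magnitudes removes starting-point/rotation dependence.
    return spectrum[1:n_coeffs + 1] / spectrum[0]
```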
Developments in multimedia technologies have paved the way for storing huge collections of video documents on computer systems. It is essential to design tools for content-based access to these documents, so as to allow efficient exploitation of such collections. Content-based analysis provides a flexible and powerful way to access video data compared with other, traditional video analysis techniques. The area of content-based video indexing and retrieval (CBVIR), which focuses on automating the indexing, retrieval and management of video, has attracted extensive research in the last decade. CBVIR is a lively area of research with enduring acknowledgment from several domains. Herein we give a critical assessment of contemporary research associated with the content-based indexing and retrieval of visual information. In this paper, we present an extensive review of significant research on CBVIR. A concise description of content-based video analysis is presented, along with the techniques associated with content-based video indexing and retrieval.
With the flooding of pornographic information on the Internet, how to keep people away from such offensive information is becoming one of the most important research areas in network information security. Applications that can block or filter such information are in use, and the approaches in those systems can be roughly classified into two kinds: metadata based and content based. With the development of distributed technologies, content-based filtering will play an increasingly important role in filtering systems. Keyword matching is a content-based method widely used in harmful text filtering. Experiments evaluating the recall and precision of the method showed that its precision is not satisfactory, although its recall is rather high. Based on these results, a new pornographic text filtering model based on reconfirmation is put forward. Experiments showed that the model is practical, loses less recall than the single keyword matching method, and achieves higher precision.
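A minimal sketch of the keyword-matching baseline and its recall/precision evaluation; the keyword set and hit threshold below are placeholders, not the paper's lexicon or parameters.

```python
# Placeholder blacklist terms and threshold, for illustration only
KEYWORDS = {"keyword1", "keyword2"}

def is_harmful(text, threshold=2):
    """Flag a text if it contains at least `threshold` keyword occurrences."""
    hits = sum(text.lower().count(k) for k in KEYWORDS)
    return hits >= threshold

def evaluate(samples):
    """samples: list of (text, is_harmful_label) pairs; returns (precision, recall)."""
    tp = sum(1 for t, y in samples if y and is_harmful(t))
    fp = sum(1 for t, y in samples if not y and is_harmful(t))
    fn = sum(1 for t, y in samples if y and not is_harmful(t))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```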
In this paper, we propose a metric to measure the shift invariance of three different contourlet transforms. Then, using the same texture image retrieval system structure, which uses subband coefficient energy, standard deviation and kurtosis features with the Canberra distance, we compare their texture description abilities. Experimental results show that the contourlet-2.3 texture image retrieval system achieves almost the same retrieval rates as the non-subsampled contourlet system, and both systems achieve better retrieval results than the original contourlet retrieval system. Given its relatively lower redundancy, we therefore recommend using contourlet-2.3 as the texture description transform.
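The retrieval-side computation described above can be sketched as follows, assuming the contourlet subband coefficients are already available as a list of arrays; SciPy supplies the Canberra distance and kurtosis used here.

```python
import numpy as np
from scipy.stats import kurtosis
from scipy.spatial.distance import canberra

def texture_signature(subbands):
    """Per-subband energy, standard deviation and kurtosis, concatenated into one vector."""
    feats = []
    for band in subbands:
        c = np.asarray(band, dtype=float).ravel()
        feats += [np.mean(c ** 2), np.std(c), kurtosis(c)]
    return np.array(feats)

def dissimilarity(query_subbands, database_subbands):
    """Canberra distance between two texture signatures (smaller means more similar)."""
    return canberra(texture_signature(query_subbands),
                    texture_signature(database_subbands))
```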
The Wireless Gigabit Alliance (WiGig) and IEEE 802.11ad are developing a multigigabit wireless personal and local area network (WPAN/WLAN) specification in the 60 GHz millimeter wave band. Chipset manufacturers, original equipment manufacturers (OEMs), and telecom companies are also assisting in this development. 60 GHz millimeter wave transmission will scale the speed of WLANs and WPANs to 6.75 Gbit/s over distances of less than 10 meters. This technology is the first of its kind and will eliminate the need for cables around personal computers, docking stations, and other consumer electronic devices. High-definition multimedia interface (HDMI), DisplayPort, USB 3.0, and peripheral component interconnect express (PCIe) 3.0 cables will all be eliminated. Fast downloads and uploads, wireless sync, and multi-gigabit-per-second WLANs will be possible over shorter distances. 60 GHz millimeter wave supports the fast session transfer (FST) protocol, which makes it backward compatible with 5 GHz or 2.4 GHz WLAN so that end users experience the same range as in today's WLANs. IEEE 802.11ad specifies the physical (PHY) sublayer and medium access control (MAC) sublayer of the protocol stack. The MAC protocol is based on time-division multiple access (TDMA), and the PHY layer uses single carrier (SC) and orthogonal frequency division multiplexing (OFDM) to simultaneously enable low-power, high-performance applications.
The basic search algorithm used to implement motion estimation (ME) in the H.263 encoder is the full search. It is simple but time consuming. Traditional fast search algorithms are quick, but may cause a drop in image quality or an increase in bit rate in low-bit-rate applications. A fast search algorithm for ME that takes image content into consideration is proposed in this paper. Experiments show that the proposed algorithm can offer up to 70 percent savings in execution time with almost no sacrifice in PSNR or bit rate, compared with the full search.
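For reference, the full-search baseline that the proposed algorithm improves upon can be sketched as an exhaustive block-matching search using the sum of absolute differences (SAD); the block size and search range below are illustrative.

```python
import numpy as np

def full_search(cur_block, ref_frame, top, left, search_range=7):
    """Exhaustively test every candidate displacement within +/- search_range (SAD criterion)."""
    h, w = cur_block.shape
    best_cost, best_mv = np.inf, (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > ref_frame.shape[0] or x + w > ref_frame.shape[1]:
                continue  # candidate block falls outside the reference frame
            sad = np.abs(cur_block.astype(np.int64)
                         - ref_frame[y:y + h, x:x + w].astype(np.int64)).sum()
            if sad < best_cost:
                best_cost, best_mv = sad, (dy, dx)
    return best_mv, best_cost
```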
This study endeavors to formulate a comprehensive methodology for establishing a Geological Knowledge Base (GKB) tailored to fracture-cavity reservoir outcrops within the North Tarim Basin. Quantitative geological parameters were acquired through diverse means such as outcrop observations, thin section studies, unmanned aerial vehicle scanning, and high-resolution cameras. Subsequently, a three-dimensional digital outcrop model was generated, and the parameters were standardized. An assessment of traditional geological knowledge was conducted to delineate the knowledge framework, content, and system of the GKB. Basic parameter knowledge was extracted using multiscale fine characterization techniques, including core statistics, field observations, and microscopic thin section analysis. Key mechanism knowledge was identified by integrating trace elements from fillings, isotope geochemical tests, and water-rock simulation experiments. Significant representational knowledge was then extracted by employing methods such as multiple linear regression, neural network technology, and discriminant classification. Subsequently, an analogy study was performed on the karst fracture-cavity system (KFCS) in both outcrop and underground reservoir settings. The results underscored several key findings: (1) Utilization of a diverse range of techniques, including outcrop observations, core statistics, unmanned aerial vehicle scanning, high-resolution cameras, thin section analysis, and electron scanning imaging, enabled the acquisition and standardization of data. This facilitated effective management and integration of geological parameter data from multiple sources and scales. (2) The GKB for fracture-cavity reservoir outcrops, encompassing basic parameter knowledge, key mechanism knowledge, and significant representational knowledge, provides robust data support and systematic geological insights for the intricate and in-depth examination of the genetic mechanisms of fracture-cavity reservoirs. (3) The developmental characteristics of fracture-cavities in karst outcrops offer effective, efficient, and accurate guidance for fracture-cavity research in underground karst reservoirs. The outlined construction method for an outcrop geological knowledge base is applicable to various fracture-cavity reservoirs in different layers and regions worldwide.
In the realm of contemporary artificial intelligence, machine learning enables automation, allowing systems to acquire and enhance their capabilities through learning. In this work, video recommendation is performed using machine learning techniques. A recommendation system is an information filtering system used to predict the "rating" or "preference" given by different users. The prediction depends on past ratings, history, interests, IMDb rating, and so on. This can be carried out using collaborative and content-based filtering approaches, which take the data given by the different users, analyze them, and then recommend the video that suits the user at that particular time. The required video datasets are taken from GroupLens. The recommender system is implemented in the Python programming language. Two algorithms are used to build it: K-means clustering and KNN classification. K-means is an unsupervised machine learning algorithm whose main goal is to cluster similar kinds of data points together and discover patterns; to do so, K-means looks for a fixed number k of clusters in a dataset, where a cluster is a collection of data points grouped together because of certain similarities. K-Nearest Neighbors is a supervised learning algorithm used for classification; given the data, KNN can classify new data points by examining the k nearest data points. The final results are evaluated through the clustering quality and the root mean squared error. Using these algorithms, we can recommend videos more appropriately based on the user's previous records and ratings.
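A hedged sketch of the two named algorithms on a placeholder user-feature matrix; the MovieLens-style features, cluster count and neighbour count are assumptions for illustration, not the authors' exact configuration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(42)
X = rng.random((200, 10))                    # placeholder user features (rows = users)
ratings = rng.integers(1, 6, size=200)       # placeholder 1-5 ratings

# K-means: group similar users into k clusters
clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)

# KNN: predict a rating class from the k nearest users, then report RMSE
knn = KNeighborsClassifier(n_neighbors=7).fit(X, ratings)
predicted = knn.predict(X)
rmse = np.sqrt(mean_squared_error(ratings, predicted))
```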
A novel joint kernel principal component analysis (PCA) and relational perspective map (RPM) method called KPmapper is proposed for hyperspectral dimensionality reduction and spectral feature recognition. Kernel PCA is used to analyze hyperspectral data so that the major information corresponding to features can be better extracted. RPM is used to visualize hyperspectral data through two-dimensional (2D) maps; it is an efficient approach for discovering regularities and extracting information by partitioning the data into pieces and mapping them onto a 2D space. The experimental results prove that the KPmapper algorithm can effectively obtain the intrinsic features of nonlinear high-dimensional data. It is useful and impressive for dimensionality reduction and spectral feature recognition.
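The kernel PCA half of KPmapper can be sketched with scikit-learn as below; the band count, kernel and parameters are illustrative, and the RPM step for 2D visualization, which has no standard library implementation, is omitted.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(1)
pixels = rng.random((500, 120))              # hypothetical 120-band hyperspectral samples (rows = pixels)

# Nonlinear projection onto a small number of kernel principal components
kpca = KernelPCA(n_components=10, kernel="rbf", gamma=0.1)
reduced = kpca.fit_transform(pixels)         # 500 x 10 feature representation
```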
We investigate spectral approaches to the problem of point pattern matching and present a spectral feature descriptor based on partial least squares (PLS). Given the keypoints of two images, we define position similarity matrices for each image and extract spectral features from these matrices using PLS; these features capture the geometric distribution and inner relationships of the keypoints. The keypoint matching is then performed by bipartite graph matching. Experiments on both synthetic and real-world data corroborate the robustness and invariance of the algorithm.
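The final bipartite matching step can be sketched with the Hungarian algorithm from SciPy, assuming per-keypoint descriptors (standing in here for the PLS spectral features, which are not reproduced) have already been computed for both images.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def match_keypoints(desc_a, desc_b):
    """One-to-one keypoint assignment minimizing total descriptor distance."""
    cost = cdist(desc_a, desc_b)                 # pairwise descriptor distances
    rows, cols = linear_sum_assignment(cost)     # optimal bipartite assignment
    return list(zip(rows, cols))
```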