This study demonstrates a novel integration of large language models, machine learning, and multicriteria decision-making to investigate self-moderation in small online communities, a topic under-explored compared to user behavior and platform-driven moderation on social media. The proposed methodological framework (1) utilizes large language models for social media post analysis and categorization, (2) employs k-means clustering for content characterization, and (3) incorporates the TODIM (Tomada de Decisão Interativa Multicritério) method to determine moderation strategies based on expert judgments. In general, the fully integrated framework leverages the strengths of these intelligent systems for a more systematic evaluation of large-scale decision problems. When applied to social media moderation, this approach promotes nuanced and context-sensitive self-moderation by taking into account factors such as cultural background and geographic location. The application of this framework is demonstrated within Facebook groups. Eight distinct content clusters encompassing safety, harassment, diversity, and misinformation are identified. Analysis revealed a preference for content removal across all clusters, suggesting a cautious approach towards potentially harmful content. However, the framework also highlights the use of other moderation actions, like account suspension, depending on the content category. These findings contribute to the growing body of research on self-moderation and offer valuable insights for creating safer and more inclusive online spaces within smaller communities.
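The k-means step in (2) can be sketched in a few lines of plain Python. This is a generic, minimal implementation operating on numeric feature vectors (e.g., post embeddings), not the authors' pipeline; the data in the usage note is made up for illustration.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Minimal k-means: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)          # initialize from the data
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        # Recompute centroids; keep the old one if a cluster went empty.
        centroids = [tuple(sum(col) / len(cl) for col in zip(*cl))
                     if cl else centroids[j]
                     for j, cl in enumerate(clusters)]
    return centroids, clusters
```

For two well-separated blobs such as `[(0, 0), (0.1, 0), (5, 5), (5.1, 5)]` with `k=2`, any initialization converges to the two blobs within a few iterations.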
Recently a new clustering algorithm called 'affinity propagation' (AP) was proposed, which efficiently clusters sparsely related data by passing messages between data points. However, in many cases we want to cluster large-scale data whose similarities are not sparse. This paper presents two variants of AP for grouping large-scale data with a dense similarity matrix. The local approach is partition affinity propagation (PAP) and the global method is landmark affinity propagation (LAP). PAP passes messages within subsets of the data first and then merges the subsets during the initial iterations; this effectively reduces the number of clustering iterations. LAP passes messages between the landmark data points first and then clusters the non-landmark data points; it is a global approximation method that speeds up clustering on large datasets. Experiments are conducted on many datasets, such as random data points, manifold subspaces, images of faces, and Chinese calligraphy, and the results demonstrate that the two approaches are feasible and practicable.
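The two-stage landmark idea behind LAP can be illustrated independently of the AP message-passing core: group a small landmark subset, then assign every remaining point to the cluster of its nearest landmark. The sketch below substitutes simple single-linkage grouping (with a hypothetical distance threshold) for running AP on the landmarks; it shows only the structural idea, not the paper's algorithm.

```python
def landmark_cluster(points, landmarks, link_threshold):
    """Group the landmarks by single-linkage with a distance threshold,
    then label each point with the cluster of its nearest landmark."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    # Single-linkage grouping of landmarks via union-find.
    parent = list(range(len(landmarks)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    for i in range(len(landmarks)):
        for j in range(i + 1, len(landmarks)):
            if dist(landmarks[i], landmarks[j]) <= link_threshold:
                parent[find(i)] = find(j)

    # Assign each point to the group of its nearest landmark.
    return [find(min(range(len(landmarks)),
                     key=lambda j: dist(p, landmarks[j])))
            for p in points]
```

With landmarks `[(0, 0), (5, 5)]` and threshold 1.0, points near the origin and points near (5, 5) receive different labels.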
As optimization problems continue to grow in complexity, the need for effective metaheuristic algorithms becomes increasingly evident. However, the challenge lies in identifying the right parameters and strategies for these algorithms. In this paper, we introduce the adaptive multi-strategy Rabbit Algorithm (RA). RA is inspired by the social interactions of rabbits, incorporating elements such as exploration, exploitation, and adaptation to address optimization challenges. It employs three distinct subgroups, comprising male, female, and child rabbits, to execute a multi-strategy search. Key parameters, including the distance factor, balance factor, and learning factor, strike a balance between precision and computational efficiency. We offer practical recommendations for fine-tuning five essential RA parameters, making them versatile and independent. RA is capable of autonomously selecting adaptive parameter settings and mutation strategies, enabling it to successfully tackle 17 CEC05 benchmark functions with dimensions scaling up to 5000. The results underscore RA's superior performance in large-scale optimization tasks, surpassing other state-of-the-art metaheuristics in convergence speed, computational precision, and scalability. Finally, RA has demonstrated its proficiency in solving complicated real-world engineering optimization problems by solving 10 problems from the CEC2020 suite.
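The abstract's core mechanism, autonomously choosing among mutation strategies, can be sketched generically. The code below is not the published RA: it is a minimal success-based strategy selector (strategies that recently improved the best solution are sampled more often), applied to a toy sphere function; all names and step sizes are illustrative assumptions.

```python
import random

def adaptive_search(f, x0, strategies, iters=2000, seed=1):
    """Greedy search with success-based strategy selection: each strategy
    maps (current best, rng) -> candidate; successful ones gain weight."""
    rng = random.Random(seed)
    best, fbest = list(x0), f(x0)
    score = [1.0] * len(strategies)
    for _ in range(iters):
        # Roulette-wheel pick of a strategy, proportional to its score.
        r, k = rng.uniform(0, sum(score)), 0
        while k < len(strategies) - 1 and r > score[k]:
            r -= score[k]
            k += 1
        cand = strategies[k](best, rng)
        fc = f(cand)
        if fc < fbest:
            best, fbest = cand, fc
            score[k] += 1.0                        # reward success
        else:
            score[k] = max(0.1, score[k] * 0.999)  # mild decay on failure
    return best, fbest
```

Typical usage pairs a coarse and a fine Gaussian mutation, so the selector shifts toward the fine strategy as the search converges.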
Deep Underground Science and Engineering (DUSE) is pleased to present this special issue highlighting recent advancements in underground large-scale energy storage technologies. This issue comprises 19 articles: six from our special issue "Underground large-scale energy storage technologies in the context of carbon neutrality", 11 from regular submissions on related topics, and two from early regular submissions. These contributions include five review articles, one perspective article, and 13 research articles. The increased volume of this issue and later issues reflects DUSE's commitment to addressing the rapid growth in submissions and the current backlog of high-quality papers.
0 INTRODUCTION Due to rapid population growth and the accelerated urbanization process, the contradiction between the demand for expanding ground space and the limited scale of available land is becoming increasingly prominent. China has implemented and completed several large-scale land infilling and excavation projects (Figure 1), which have become the main way to increase land resources and expand construction land.
It is a challenging topic to develop an efficient algorithm for large-scale classification problems in many applications of machine learning. In this paper, a hierarchical clustering and fixed-layer local learning (HCFLL) based support vector machine (SVM) algorithm is proposed to deal with this problem. Firstly, HCFLL hierarchically clusters a given dataset into a modified clustering feature tree based on the ideas of unsupervised clustering and supervised clustering. Then it locally trains SVMs on each labeled subtree at a fixed layer of the tree. The experimental results show that, compared with existing popular algorithms such as the core vector machine and the decision-tree support vector machine, HCFLL can significantly improve training and testing speeds with comparable testing accuracy.
Density-based algorithm for discovering clusters in large spatial databases with noise (DBSCAN) is a classic density-based spatial clustering algorithm and is widely applied due to its good performance in capturing arbitrary shapes and detecting outliers. In practice, however, datasets are often too massive for the serial DBSCAN. A new parallel algorithm, Parallel DBSCAN (PDBSCAN), is proposed to solve this problem. The proposed parallel algorithm is based on the MapReduce mechanism. The parallelism in the algorithm focuses on region queries and candidate queue processing, which require substantial computational resources. As a result, PDBSCAN is scalable for large-scale dataset clustering and is especially suitable for applications in E-Commerce, particularly recommendation.
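For reference, the serial DBSCAN core that PDBSCAN parallelizes looks roughly like this. It is a minimal, unoptimized sketch (brute-force region queries); the paper's actual contribution, distributing the region queries and candidate queue over MapReduce, is not shown.

```python
from collections import deque

def dbscan(points, eps, min_pts):
    """Serial DBSCAN core: returns labels[i] = cluster id, or -1 for noise."""
    def neighbors(i):
        # Brute-force region query: all points within eps of points[i].
        return [j for j, q in enumerate(points)
                if sum((a - b) ** 2 for a, b in zip(points[i], q)) <= eps * eps]

    labels = [None] * len(points)
    cid = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1            # provisionally noise
            continue
        labels[i] = cid               # i is a core point: start a cluster
        queue = deque(nbrs)
        while queue:
            j = queue.popleft()
            if labels[j] == -1:
                labels[j] = cid       # border point reclaimed from noise
            if labels[j] is not None:
                continue
            labels[j] = cid
            jn = neighbors(j)
            if len(jn) >= min_pts:    # j is also core: expand from it
                queue.extend(jn)
        cid += 1
    return labels
```

On two tight blobs plus one isolated point, the two blobs get distinct cluster ids and the isolated point is labeled noise.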
The purpose of this application paper is to apply the Stein-Chen (SC) method to provide a Poisson-based approximation and corresponding total variation distance bounds in a time series context. The SC method that is used approximates the probability density function (PDF) defined on how many times a pattern such as I_t, I_{t+1}, I_{t+2} = {1 0 1} occurs starting at position t in a time series of length N that has been converted to binary values using a threshold. The original time series that is converted to binary is assumed to consist of a sequence of independent random variables, and could, for example, be a series of residuals that result from fitting any type of time series model. Note that if {1 0 1} is known to not occur, for example, starting at position t = 1, then this information impacts the probability that {1 0 1} occurs starting at position t = 2 or t = 3, because the trials to obtain {1 0 1} are overlapping and thus not independent, so the Poisson distribution assumptions are not met. Nevertheless, the results shown in four examples demonstrate that the Poisson-based approximation (which is strictly correct only for independent trials) can be remarkably accurate, and the SC method provides a bound on the total variation distance between the true and approximate PDF.
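The quantity being approximated can be made concrete: count the overlapping occurrences of the pattern in the thresholded series, and compare the count's distribution with a Poisson pmf. For iid bits with P(1) = p, the expected number of starts of {1 0 1} is λ = (N − 2)p²(1 − p). The sketch below (not the authors' code) shows the counter and the Poisson pmf used for the approximation.

```python
import math

def count_pattern(bits, pattern=(1, 0, 1)):
    """Count (possibly overlapping) occurrences of `pattern` in a 0/1 list."""
    m = len(pattern)
    return sum(tuple(bits[t:t + m]) == pattern
               for t in range(len(bits) - m + 1))

def poisson_pmf(lam, k):
    """P(X = k) for X ~ Poisson(lam); the approximating distribution."""
    return math.exp(-lam) * lam ** k / math.factorial(k)
```

For example, `count_pattern([1, 0, 1, 0, 1])` is 2, because the occurrences starting at t = 0 and t = 2 overlap, which is exactly why the trials are dependent and the Poisson approximation needs the SC bound.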
The metamorphosed sedimentary type of iron deposits (BIF) is the most important type of iron deposit in the world, and super-large iron ore clusters of this type include the Quadrilatero Ferrifero district and Carajas in Brazil, Hamersley in Australia, Kursk in Russia, the Central Province of India, and Anshan-Benxi in China. Subordinate types of iron deposits are magmatic, volcanic-hosted and sedimentary ones. This paper briefly introduces the geological characteristics of the major super-large iron ore clusters in the world. The proven reserves of iron ores in China are relatively abundant, but they are mainly low-grade ores. Moreover, a considerable part of the iron ores is difficult to utilize because of difficult ore dressing, deep burial or other reasons. Iron ore deposits are relatively concentrated in 11 metallogenic provinces (belts), such as Anshan-Benxi, eastern Hebei, Xichang-Central Yunnan and the middle-lower reaches of the Yangtze River. The main minerogenetic epochs vary widely from the Archean to the Quaternary, and are mainly the Late Archean to Middle Proterozoic, Variscan, and Yanshanian periods. The 7 main genetic types of iron deposits in China are metamorphosed sedimentary (BIF), magmatic, volcanic-hosted, skarn, hydrothermal, sedimentary and weathered leaching types. The iron-rich ores occur predominantly in the skarn and marine volcanic-hosted iron deposits, and locally in the metamorphosed sedimentary type (BIF) as hydrothermal reformation products. The theory of minerogenetic series of mineral deposits and minerogenic models has been applied in the investigation and prospecting of iron ore deposits. Combining deep analyses of aeromagnetic and geomagnetic anomalies with gravity anomalies is an effective method for seeking large and deep-buried iron deposits. China has relatively great ore-searching potential for iron ores, especially for metamorphosed sedimentary, skarn, and marine volcanic-hosted iron deposits. Given the low self-sufficiency of its iron and steel industry, China should also engage in trade and open up foreign mining markets.
A method to synthesize the anticancer drug N-(4-hydroxyphenyl)retinamide (4-HPR) on a large scale is described. It consists of the preferred steps of reacting all-trans retinoic acid with thionyl chloride to form retinoyl chloride, then reacting with triethylamine to generate a retinoyl ammonium salt, which in turn is reacted with p-aminophenol to eventually produce 4-HPR. This process overcomes many scale-up challenges that exist in the methods reported in the literature and provides an easy, mild and high-yield route for large-scale synthesis of 4-HPR. Moreover, the effects of the molar ratios of the reagents on the yield are examined. The best molar ratios are a 2.0 molar equivalence of thionyl chloride and a 3.0 molar equivalence of p-aminophenol to retinoic acid, and the total yield is 80.7%.
[Objective] This study aimed to develop ACGM markers for the clustering analysis of large-grained Brassica napus materials. [Method] A total of 44 pairs of ACGM primers were designed according to 18 genes related to Arabidopsis grain development and their homologous rape EST sequences. After electrophoresis, 18 pairs of ACGM primers were selected for the clustering analysis of 16 large-grained samples and four fine-grained samples of rapeseed. [Result] PCR results showed that 2-6 specific bands were amplified by each pair of primers, and all the bands were polymorphic and repeatable, suggesting that the optimized ACGM markers were useful for clustering analysis of B. napus species. Clustering analysis revealed that the 20 rapeseed samples were divided into three clusters A, B, and C at a similarity coefficient of 0.6. The clusters A and B were further divided into five subclusters A1, A2, A3, B1 and B2 at a similarity coefficient of 0.67. [Conclusion] This study provides theoretical and practical value for rape breeding.
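Cutting a dendrogram at a similarity coefficient, as done here at 0.6 and 0.67, can be sketched on 0/1 band-presence profiles. The abstract does not state which similarity coefficient was used, so the sketch below assumes the Dice coefficient (a common choice for marker band data) and a greedy single-linkage grouping; the profiles in the test are invented.

```python
def dice_similarity(bands_a, bands_b):
    """Dice coefficient between two 0/1 band-presence profiles:
    2 * (shared bands) / (bands in a + bands in b)."""
    shared = sum(a and b for a, b in zip(bands_a, bands_b))
    return 2 * shared / (sum(bands_a) + sum(bands_b))

def group_at_threshold(profiles, t):
    """Greedy single-linkage grouping: a sample joins the first existing
    cluster containing a member with similarity >= t, else starts a new one."""
    clusters = []
    for p in profiles:
        for cl in clusters:
            if any(dice_similarity(p, q) >= t for q in cl):
                cl.append(p)
                break
        else:
            clusters.append([p])
    return clusters
```

Raising the threshold (e.g., from 0.6 to 0.67) can only split clusters further, which mirrors how clusters A and B subdivide into A1-A3 and B1-B2.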
A new limited-memory symmetric rank one algorithm is proposed. It combines a modified self-scaled symmetric rank one (SSR1) update with limited memory and a nonmonotone line search technique. In this algorithm, the descent search direction is generated by an inverse limited-memory SSR1 update, thus simplifying the computation. A numerical comparison of the algorithm with the well-known limited-memory BFGS algorithm is given. The comparison results indicate that the new algorithm can handle a class of large-scale unconstrained optimization problems.
The regional hydrological system is extremely complex because it is affected not only by physical factors but also by human dimensions, and hydrological models play a very important role in simulating this complex system. However, effective methods for analyzing model reliability and uncertainty have been lacking because of this complexity. The uncertainties in hydrological modeling come from four important sources: uncertainties in input data and parameters, uncertainties in model structure, uncertainties in the analysis method, and uncertainties in the initial and boundary conditions. This paper systematically reviews recent advances in uncertainty analysis approaches for large-scale complex hydrological models, organized by uncertainty source. The shortcomings and insufficiencies of uncertainty analysis for complex hydrological models are also pointed out. A new uncertainty quantification platform, PSUADE, and its uncertainty quantification methods are then introduced; it promises to be a powerful tool and platform for uncertainty analysis of large-scale complex hydrological models. Finally, some future perspectives on uncertainty quantification are put forward.
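The first uncertainty source, input data and parameters, is commonly handled by Monte Carlo propagation, one of the generic approaches such reviews survey (PSUADE itself offers many more methods, including quasi-Monte Carlo sampling and response surfaces). A minimal sketch with a toy model; the model and distributions are invented for illustration.

```python
import random
import statistics

def monte_carlo_uncertainty(model, param_dists, n=2000, seed=0):
    """Propagate parameter uncertainty through `model` by sampling:
    each entry of param_dists maps an rng to one sampled parameter.
    Returns the empirical 5th percentile, median, and 95th percentile."""
    rng = random.Random(seed)
    outs = []
    for _ in range(n):
        params = [d(rng) for d in param_dists]
        outs.append(model(params))
    outs.sort()
    return outs[int(0.05 * n)], statistics.median(outs), outs[int(0.95 * n)]
```

The width of the (5%, 95%) band is a simple summary of how parameter uncertainty translates into output uncertainty.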
This study investigates the dominant modes of variability in monthly and seasonal rainfall over the India-China region, mainly through Empirical Orthogonal Function (EOF) analysis. The EOFs show that whereas the rainfall over India varies as one coherent zone, that over China varies in east-west oriented bands. The influence of this banded structure extends well into India. The relationship of rainfall with large-scale parameters such as the subtropical ridge over the Indian and western Pacific regions, the Southern Oscillation, the Northern Hemispheric surface air temperature, and stratospheric winds has also been investigated. These results show that the rainfall over the area around 40°N, 110°E in China is highly related to rainfall over India. The subtropical ridge over the Indian region is an important predictor over India as well as over the northern China region.
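EOFs are the eigenvectors of the spatial covariance matrix of the anomaly fields, so the leading EOF can be sketched with power iteration on a tiny time-by-space array. This is a generic illustration of the technique, not the study's analysis; the two-gridpoint "field" in the test is invented.

```python
def leading_eof(fields):
    """Leading EOF of a list of fields (time x space) via power iteration
    on the spatial covariance matrix of the anomalies."""
    nt, ns = len(fields), len(fields[0])
    # Anomalies: remove the time mean at each grid point.
    mean = [sum(f[j] for f in fields) / nt for j in range(ns)]
    X = [[f[j] - mean[j] for j in range(ns)] for f in fields]
    # Spatial covariance C = X^T X / nt  (ns x ns).
    C = [[sum(X[t][i] * X[t][j] for t in range(nt)) / nt for j in range(ns)]
         for i in range(ns)]
    v = [1.0] * ns
    for _ in range(200):                # power iteration
        w = [sum(C[i][j] * v[j] for j in range(ns)) for i in range(ns)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v
```

When all the variance lies along one spatial direction, the iteration recovers that direction (up to sign) as the leading EOF.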
We present the design and performance of a home-built scanning tunneling microscope (STM), which is compact (66 mm tall and 25 mm in diameter), yet equipped with a 3D atomic-precision piezoelectric motor in which the Z coarse approach relies on a highly simple friction-type walker (of our own invention) driven by an axially cut piezoelectric tube. The walker is vertically inserted in a piezoelectric scanner tube (PST) with its brim lying flat on the PST end as the inertial slider (driven by the PST) for the XY (sample plane) motion. The STM is designed to be capable of searching for rare microscopic targets (defects, dopants, boundaries, nano-devices, etc.) in a macroscopic sample area (square millimeters) under the extreme conditions (low temperatures, strong magnetic fields, etc.) in which it fits. It gives good atomic-resolution images after scanning a highly oriented pyrolytic graphite sample in air at room temperature.
We present a deterministic algorithm for large-scale VLSI module placement. Following the less flexibility first (LFF) principle, we simulate a manual packing process in which the concept of placement by stages is introduced to reduce the overall evaluation complexity. The complexity of the proposed algorithm is (N1 + N2) × O(n^2) + N3 × O(n^4 lg n), where N1, N2, and N3 denote the number of modules in each stage, N1 + N2 + N3 = n, and N3 << n. This complexity is much less than the original time complexity of O(n^5 lg n). Experimental results indicate that this approach is quite promising.
Human pluripotent stem cells (hPSCs), including human embryonic stem cells and human induced pluripotent stem cells, are promising sources of hematopoietic cells due to their unlimited growth capacity and pluripotency. Dendritic cells (DCs), unique immune cells in the hematopoietic system, can be loaded with tumor-specific antigen and used as vaccines for cancer immunotherapy. While autologous DCs from peripheral blood are limited in cell number, hPSC-derived DCs provide a novel alternative cell source with the potential for large-scale production. This review summarizes recent advances in differentiating hPSCs to DCs through the intermediate stage of hematopoietic stem cells. Step-wise growth factor induction has been used to derive DCs from hPSCs either in suspension culture of embryoid bodies (EBs) or in co-culture with stromal cells. To fulfill the clinical potential of DCs derived from hPSCs, the bioprocess needs to be scaled up to produce large numbers of cells economically under tight quality control. This requires the development of novel bioreactor systems combining guided EB-based differentiation with an engineered culture environment. Hence, recent progress in using bioreactors for hPSC lineage-specific differentiation is reviewed. In particular, potential scale-up strategies for the multistage DC differentiation and the effect of shear stress on hPSC differentiation in bioreactors are discussed in detail.
Mycothiol (MSH) is the major low-molecular-weight thiol in most actinomycetes. Chemical synthesis of MSH is of value for enzymology and inhibitor screening assays, but is hampered by difficulties in large-scale synthesis. We achieved the total synthesis of MSH by first linking 2-camphanoyl-3,4,5,6-tetra-O-benzyl-D-myo-inositol (D-1) and 2-deoxy-2-azido-3,4,6-tri-O-benzyl-1-p-toluene-thio-o-glucoside (2), followed by coupling with N-Boc-S-acetyl-L-cysteine (3). This route allowed the efficient and convenient synthesis of mycothiol on a large scale.
We study how to use the SR1 update to realize minimization methods for problems where storage is critical. We give an update formula which generates matrices using information from the last m iterations. The numerical tests show that the method is efficient.
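The basic SR1 update underlying such limited-memory methods is B+ = B + (y − Bs)(y − Bs)ᵀ / ((y − Bs)ᵀs), with the update skipped when the denominator is too small (the standard safeguard). A minimal sketch of a single update (the limited-memory bookkeeping over the last m pairs is not shown):

```python
def sr1_update(B, s, y, eps=1e-8):
    """One SR1 update of the Hessian approximation B (a list-of-lists
    matrix) from the step s and gradient difference y.
    Skips the update when |(y - Bs)^T s| is too small."""
    n = len(B)
    Bs = [sum(B[i][j] * s[j] for j in range(n)) for i in range(n)]
    r = [y[i] - Bs[i] for i in range(n)]            # residual y - Bs
    denom = sum(r[i] * s[i] for i in range(n))
    r_norm = sum(x * x for x in r) ** 0.5
    s_norm = sum(x * x for x in s) ** 0.5
    if abs(denom) < eps * r_norm * s_norm:
        return B                                    # safeguarded skip
    return [[B[i][j] + r[i] * r[j] / denom for j in range(n)]
            for i in range(n)]
```

By construction the updated matrix satisfies the secant condition B+ s = y, which the test below checks on a 2x2 example.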
Local diversity AdaBoost support vector machine (LDAB-SVM) is proposed for large-scale dataset classification problems. The training dataset is first split into several blocks, and models are built on these dataset blocks. In order to obtain better performance, AdaBoost is used in building each model. In the boosting iteration step, the component learners with higher diversity and accuracy are collected by adjusting the kernel parameters. The local models are then integrated via a voting method. The experimental study shows that LDAB-SVM can deal with large-scale datasets efficiently without reducing the performance of the classifier.
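The block-then-vote structure can be sketched without the SVM and AdaBoost machinery. The sketch below trains one decision stump per data block (a stand-in for the paper's boosted SVMs, chosen only so the example stays self-contained) and combines them by majority vote; the data in the test is made up.

```python
def fit_stump(X, y):
    """Fit a one-feature threshold classifier (labels +1/-1) by brute force
    over the thresholds that appear in the data."""
    best = None
    for t in [row[0] for row in X]:
        for sign in (1, -1):
            acc = sum((sign if row[0] >= t else -sign) == lab
                      for row, lab in zip(X, y))
            if best is None or acc > best[0]:
                best = (acc, t, sign)
    return best[1], best[2]

def predict_stump(model, x):
    t, sign = model
    return sign if x[0] >= t else -sign

def fit_block_ensemble(blocks):
    """Train one local model per (X, y) data block; predict by majority vote
    over the local models."""
    models = [fit_stump(X, y) for X, y in blocks]
    def vote(x):
        preds = [predict_stump(m, x) for m in models]
        return max(set(preds), key=preds.count)
    return vote
```

Because each local model only ever sees its own block, the blocks can be trained independently, which is what makes the scheme attractive for large datasets.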
Funding: Office of the Vice-President for Research and Development of Cebu Technological University.
Funding: National Natural Science Foundation of China (Nos. 60533090 and 60603096); National Hi-Tech Research and Development Program (863) of China (No. 2006AA010107); Key Technology R&D Program of China (No. 2006BAH02A13-4); Program for Changjiang Scholars and Innovative Research Team in University of China (No. IRT0652); Cultivation Fund of the Key Scientific and Technical Innovation Project of MOE, China (No. 706033).
Funding: Key Research and Development Program of Shaanxi Province (No. 2024SFYBXM-669); National Natural Science Foundation of China (No. 42271078).
Funding: National Natural Science Foundation of China (No. 61070033); Fundamental Research Funds for the Central Universities, China (No. 2012ZM0061).
Funding: National Natural Science Foundation of China (Nos. 61070101, 60875029, 61175048).
Funding: Supported by the National Natural Science Foundation of China (grant No. 40773038), the Program of High-level Geological Talents (201309), and the Program of Youth Geological Talents (201112) of the China Geological Survey.
Abstract: The metamorphosed sedimentary type of iron deposit (BIF) is the most important type of iron deposit in the world; super-large iron ore clusters of this type include the Quadrilatero Ferrifero district and Carajas in Brazil, Hamersley in Australia, Kursk in Russia, the Central Province of India and Anshan-Benxi in China. Subordinate types of iron deposits are magmatic, volcanic-hosted and sedimentary ones. This paper briefly introduces the geological characteristics of the major super-large iron ore clusters in the world. The proven reserves of iron ores in China are relatively abundant, but they are mainly low-grade ores, and a considerable part of them are difficult to utilize because of difficult ore dressing, deep burial or other reasons. Iron ore deposits are relatively concentrated in 11 metallogenic provinces (belts), such as Anshan-Benxi, eastern Hebei, Xichang-Central Yunnan and the middle-lower reaches of the Yangtze River. The main minerogenetic epochs range widely from the Archean to the Quaternary, and are mainly the Late Archean to Middle Proterozoic, Variscan, and Yanshanian periods. The seven main genetic types of iron deposits in China are the metamorphosed sedimentary (BIF), magmatic, volcanic-hosted, skarn, hydrothermal, sedimentary and weathered leaching types. Iron-rich ores occur predominantly in the skarn and marine volcanic-hosted iron deposits, and locally in the metamorphosed sedimentary type (BIF) as hydrothermal reformation products. The theory of minerogenetic series of mineral deposits and minerogenic models has been applied to the investigation and prospecting of iron ore deposits. Deep analysis of aeromagnetic and geomagnetic anomalies, combined with gravity anomalies, is an effective method for finding large, deeply buried iron deposits. China has relatively great ore-searching potential for iron ores, especially for the metamorphosed sedimentary, skarn, and marine volcanic-hosted types. Given the low degree of supply guarantee for its iron and steel industry, China should develop iron ore trading and open up foreign mining markets.
Abstract: A method to synthesize the anticancer drug N-(4-hydroxyphenyl)retinamide (4-HPR) on a large scale is described. It consists of the preferred steps of reacting all-trans retinoic acid with thionyl chloride to form retinoyl chloride, reacting this with triethylamine to generate a retinoyl ammonium salt, and finally reacting the salt with p-aminophenol to produce 4-HPR. This process overcomes many scale-up challenges in the methods reported in the literature and provides an easy, mild and high-yield route for large-scale synthesis of 4-HPR. Moreover, the effects of the molar ratios of the reagents on the yield are examined: the best ratios are 2.0 molar equivalents of thionyl chloride and 3.0 molar equivalents of p-aminophenol to retinoic acid, giving a total yield of 80.7%.
Funding: Supported by the National Natural Science Foundation of China (30860147), the Open Funds of the National Key Laboratory of Crop Genetic Improvement (ZK200902), and the Natural Science Foundation of Yunnan Province (2011FB117).
Abstract: [Objective] This study aimed to develop ACGM markers for the clustering analysis of large-grained Brassica napus materials. [Method] A total of 44 pairs of ACGM primers were designed according to 18 genes related to Arabidopsis grain development and their homologous rape EST sequences. After electrophoresis, 18 pairs of ACGM primers were selected for the clustering analysis of 16 large-grained and four fine-grained rapeseed samples. [Result] PCR results showed that each pair of primers amplified 2-6 specific bands, all of which were polymorphic and repeatable, suggesting that the optimized ACGM markers are useful for clustering analysis of B. napus. Clustering analysis revealed that the 20 rapeseed samples were divided into three clusters, A, B, and C, at a similarity coefficient of 0.6; clusters A and B were further divided into five sub-clusters, A1, A2, A3, B1 and B2, at a similarity coefficient of 0.67. [Conclusion] This study provides theoretical and practical value for rape breeding.
Funding: Supported by the National Natural Science Foundation of China (10471062) and the Natural Science Foundation of Jiangsu Province (BK2006184).
Abstract: A new limited memory symmetric rank one algorithm is proposed. It combines a modified self-scaled symmetric rank one (SSR1) update with limited memory and nonmonotone line search techniques. In this algorithm, the descent search direction is generated by an inverse limited memory SSR1 update, which simplifies the computation. A numerical comparison of the algorithm with the well-known limited memory BFGS algorithm is given; the results indicate that the new algorithm can handle a class of large-scale unconstrained optimization problems.
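The SSR1 method builds on the classic symmetric rank-one secant update. The sketch below shows the generic SR1 update (not the paper's modified self-scaled, limited-memory variant) with the standard denominator safeguard, and demonstrates its well-known property that on a quadratic the Hessian approximation is recovered after n independent steps, no line search required:

```python
import numpy as np

def sr1_update(B, s, y, tol=1e-8):
    """Symmetric rank-one secant update of the Hessian approximation B."""
    r = y - B @ s
    denom = r @ s
    if abs(denom) <= tol * np.linalg.norm(r) * np.linalg.norm(s):
        return B  # standard safeguard: skip the update when the denominator is tiny
    return B + np.outer(r, r) / denom

# On the quadratic f(x) = 0.5 x^T A x, SR1 reproduces A after n independent steps.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
B = np.eye(2)
x = np.array([1.0, 1.0])
for _ in range(2):
    g = A @ x                 # gradient of the quadratic at x
    s = -0.1 * g              # an illustrative fixed-step descent direction
    y = A @ (x + s) - g       # change in gradient over the step
    B = sr1_update(B, s, y)
    x = x + s
print(np.allclose(B, A))      # prints True: exact Hessian recovered in n = 2 steps
```

The limited-memory version of the paper avoids storing B explicitly, instead regenerating its action from the last m pairs (s, y); the update formula itself is the same rank-one correction.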
Funding: National Key Basic Research Program of China, No. 2010CB428403; National Grand Science and Technology Special Project of Water Pollution Control and Improvement, No. 2009ZX07210-006.
Abstract: The regional hydrological system is extremely complex because it is affected not only by physical factors but also by human dimensions, and hydrological models play a very important role in simulating it. However, effective methods for analyzing model reliability and uncertainty have been lacking because of this complexity. The uncertainties in hydrological modeling come from four important sources: uncertainties in input data and parameters, in model structure, in the analysis method, and in the initial and boundary conditions. This paper systematically reviews recent advances in uncertainty analysis approaches for large-scale complex hydrological models, organized by uncertainty source, and points out the shortcomings and insufficiencies of current uncertainty analysis for such models. It then introduces the uncertainty quantification platform PSUADE and its methods, which provide a powerful tool and platform for uncertainty analysis of large-scale complex hydrological models. Finally, some future perspectives on uncertainty quantification are put forward.
Abstract: This study investigates the dominant modes of variability in monthly and seasonal rainfall over the India-China region, mainly through Empirical Orthogonal Function (EOF) analysis. The EOFs show that whereas rainfall over India varies as one coherent zone, rainfall over China varies in east-west oriented bands, and the influence of this banded structure extends well into India. Relationships of rainfall with large-scale parameters, such as the subtropical ridge over the Indian and western Pacific regions, the Southern Oscillation, Northern Hemispheric surface air temperature and stratospheric winds, have also been investigated. These results show that rainfall over the area around 40°N, 110°E in China is highly related to rainfall over India, and that the subtropical ridge over the Indian region is an important predictor over India as well as over the northern China region.
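EOF analysis amounts to an eigen-decomposition of the anomaly covariance, conveniently computed via SVD. The sketch below uses a synthetic field of our own construction (not the study's rainfall data), with one prescribed coherent mode and one banded mode standing in for the India-like and China-like patterns:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic monthly field: 120 months x 50 grid points, built from two
# prescribed spatial modes plus noise
t = np.arange(120)
mode1 = np.sin(np.linspace(0, np.pi, 50))            # single coherent zone
mode2 = np.cos(np.linspace(0, 3 * np.pi, 50))        # east-west banded structure
field = (np.outer(np.sin(2 * np.pi * t / 12), mode1)
         + 0.5 * np.outer(np.cos(2 * np.pi * t / 12), mode2)
         + 0.1 * rng.standard_normal((120, 50)))

anom = field - field.mean(axis=0)                     # remove the time mean
U, s, Vt = np.linalg.svd(anom, full_matrices=False)   # EOFs are the rows of Vt
explained = s ** 2 / np.sum(s ** 2)                   # fraction of variance per EOF
print(f"variance explained by the first two EOFs: {explained[:2].sum():.2%}")
```

Because the two prescribed modes dominate the noise, the leading two EOFs recover nearly all the variance; in the real rainfall analysis the leading EOFs are what reveal the coherent-zone versus banded contrast.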
Abstract: We present the design and performance of a home-built scanning tunneling microscope (STM) which is compact (66 mm tall and 25 mm in diameter), yet equipped with a 3D atomic-precision piezoelectric motor in which the Z coarse approach relies on a friction-type walker of high simplicity (of our own invention) driven by an axially cut piezoelectric tube. The walker is vertically inserted in a piezoelectric scanner tube (PST), with its brim lying flat on the PST end as the inertial slider (driven by the PST) for the XY (sample plane) motion. The STM is designed to be capable of searching for rare microscopic targets (defects, dopants, boundaries, nano-devices, etc.) over a macroscopic sample area (square millimeters) under the extreme conditions (low temperatures, strong magnetic fields, etc.) in which it fits. It gives good atomic-resolution images when scanning a highly oriented pyrolytic graphite sample in air at room temperature.
Abstract: We present a deterministic algorithm for large-scale VLSI module placement. Following the less flexibility first (LFF) principle, we simulate a manual packing process in which the concept of placement by stages is introduced to reduce the overall evaluation complexity. The complexity of the proposed algorithm is (N1 + N2) × O(n^2) + N3 × O(n^4 lg n), where N1, N2, and N3 denote the numbers of modules placed in each stage, N1 + N2 + N3 = n, and N3 << n. This is much less than the original time complexity of O(n^5 lg n). Experimental results indicate that this approach is quite promising.
Funding: Supported in part by a Florida State University start-up fund, a Florida State University Research Foundation GAP award, and partial support from the National Science Foundation, No. 1342192.
Abstract: Human pluripotent stem cells (hPSCs), including human embryonic stem cells and human induced pluripotent stem cells, are promising sources of hematopoietic cells due to their unlimited growth capacity and pluripotency. Dendritic cells (DCs), the unique immune cells of the hematopoietic system, can be loaded with tumor-specific antigen and used as vaccines for cancer immunotherapy. While autologous DCs from peripheral blood are limited in cell number, hPSC-derived DCs provide a novel alternative cell source with the potential for large-scale production. This review summarizes recent advances in differentiating hPSCs to DCs through the intermediate stage of hematopoietic stem cells. Step-wise growth factor induction has been used to derive DCs from hPSCs either in suspension culture of embryoid bodies (EBs) or in co-culture with stromal cells. To fulfill the clinical potential of hPSC-derived DCs, the bioprocess needs to be scaled up to produce a large number of cells economically under tight quality control, which requires the development of novel bioreactor systems combining guided EB-based differentiation with an engineered culture environment. Hence, recent progress in using bioreactors for hPSC lineage-specific differentiation is reviewed. In particular, potential scale-up strategies for the multistage DC differentiation and the effect of shear stress on hPSC differentiation in bioreactors are discussed in detail.
Funding: National Natural Science Foundation of China (NSFC, Grant No. 91213303).
Abstract: Mycothiol (MSH) is the major low-molecular-weight thiol in most actinomycetes. Chemical synthesis of MSH is of value for enzymology and inhibitor screening assays, but is hampered by difficulties in large-scale synthesis. We achieved the total synthesis of MSH by first linking 2-camphanoyl-3,4,5,6-tetra-O-benzyl-D-myo-inositol (D-1) and 2-deoxy-2-azido-3,4,6-tri-O-benzyl-1-p-toluene-thio-D-glucoside (2), followed by coupling with N-Boc-S-acetyl-L-cysteine (3). This route allows the efficient and convenient synthesis of mycothiol on a large scale.
Abstract: We study how to use the SR1 update in minimization methods for problems where storage is critical. We give an update formula which generates matrices using information from the last m iterations. Numerical tests show that the method is efficient.
Funding: Supported by the National Natural Science Foundation of China (60603098).
Abstract: Local diversity AdaBoost support vector machine (LDAB-SVM) is proposed for large-scale dataset classification problems. The training dataset is first split into several blocks, and models are built on these blocks, with AdaBoost used in each model-building step to obtain better performance. In the boosting iterations, component learners with higher diversity and accuracy are collected by adjusting the kernel parameters, and the local models are then integrated via a voting method. The experimental study shows that LDAB-SVM can deal with large-scale datasets efficiently without reducing the performance of the classifier.
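The split-boost-vote scheme above can be sketched as follows. This is a generic illustration on synthetic data, substituting boosted decision stumps for the paper's SVM component learners (and plain weighted voting for its diversity-based learner selection):

```python
import numpy as np

def train_stump(X, y, w):
    """Pick the weighted-error-minimizing single-feature threshold classifier."""
    best = (np.inf, 0, 0.0, 1)
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = sign * np.where(X[:, j] > thr, 1, -1)
                err = w[pred != y].sum()
                if err < best[0]:
                    best = (err, j, thr, sign)
    return best

def adaboost_stumps(X, y, rounds=10):
    """AdaBoost on one data block; returns the weighted component learners."""
    w = np.full(len(y), 1.0 / len(y))
    model = []
    for _ in range(rounds):
        err, j, thr, sign = train_stump(X, y, w)
        err = min(max(err, 1e-10), 1 - 1e-10)      # guard the log
        alpha = 0.5 * np.log((1 - err) / err)
        pred = sign * np.where(X[:, j] > thr, 1, -1)
        w = w * np.exp(-alpha * y * pred)          # upweight misclassified samples
        w = w / w.sum()
        model.append((alpha, j, thr, sign))
    return model

def vote(models, X):
    """Integrate the per-block boosted models by weighted voting."""
    score = np.zeros(len(X))
    for model in models:
        for alpha, j, thr, sign in model:
            score += alpha * sign * np.where(X[:, j] > thr, 1, -1)
    return np.where(score > 0, 1, -1)

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)         # synthetic two-class problem
blocks = np.array_split(rng.permutation(300), 3)   # split the training set into blocks
models = [adaboost_stumps(X[b], y[b]) for b in blocks]  # boost within each block
acc = (vote(models, X) == y).mean()
print(f"ensemble training accuracy: {acc:.2f}")
```

Each block's model sees only a third of the data, which is where the scalability of the scheme comes from; the final voting step is what recovers a single classifier over the whole dataset.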