Funding: This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Abstract: The study aims to investigate the financial technology (FinTech) factors influencing Chinese banking performance. Financial expectations and global realities may be changed by FinTech's multidimensional scope, which is lacking in the traditional financial sector. The use of technology to automate financial services is becoming more important for economic organizations and industries because the digital age has seen a period of transition in terms of consumers and personalization. The future of FinTech will be shaped by technologies like the Internet of Things, blockchain, and artificial intelligence. The involvement of these platforms in financial services is a major consideration for global business growth, and FinTech is becoming more popular with customers because of such benefits. FinTech has driven a fundamental change within the financial services industry, placing the client at the center of everything. Protection has become a primary focus since data are a component of FinTech transactions. The task of consolidating research reports for consensus remains largely manual, as there is no standardized format. Although existing research has proposed various methods, they have drawbacks in FinTech payment systems (including cryptocurrencies), credit markets (including peer-to-peer lending), and insurance systems. This paper implements blockchain-based financial technology for the banking sector to overcome these transition issues. In this study, we propose an adaptive neuro-fuzzy-based K-nearest neighbors algorithm. The chaotic improved foraging optimization algorithm is used to optimize the proposed method, and a rolling window autoregressive lag modeling approach analyzes FinTech growth. The proposed algorithm is compared with existing approaches to demonstrate its efficiency. The findings showed that it achieved 91% accuracy, 90% privacy, 96% robustness, and 25% cyber-risk performance. Compared with traditional approaches, the recommended strategy will be more convenient, safe, and effective in the transition period.
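The growth analysis above names a rolling window autoregressive lag model. As a minimal, hypothetical sketch of that idea — a single lag fitted by ordinary least squares over a sliding window, with an invented series rather than the paper's actual data:

```python
# Rolling-window AR(1) sketch: within each window, regress y_t on y_{t-1}
# by ordinary least squares, then forecast one step past the window.
# Assumes each window has non-zero variance.
def ar1_fit(window):
    x = window[:-1]                      # lagged values y_{t-1}
    y = window[1:]                       # current values y_t
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    slope = cov / var
    return my - slope * mx, slope        # (intercept, slope)

def rolling_forecast(series, window_size):
    # Slide the window over the series; each fit predicts the next point.
    forecasts = []
    for start in range(len(series) - window_size):
        window = series[start:start + window_size]
        c, phi = ar1_fit(window)
        forecasts.append(c + phi * window[-1])
    return forecasts

growth = [1.0, 1.2, 1.1, 1.3, 1.25, 1.4, 1.35, 1.5]   # hypothetical index
print(rolling_forecast(growth, 5))                     # three 1-step forecasts
```

A fuller version would include more lags (as "autoregressive lag" suggests) and a lag-selection criterion; the single-lag case keeps the rolling mechanics visible.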
Abstract: The use of machine learning algorithms to identify characteristics of Distributed Denial of Service (DDoS) attacks has emerged as a powerful approach in cybersecurity. DDoS attacks, which aim to overwhelm a network or service with a flood of malicious traffic, pose significant threats to online systems. Traditional methods of detection and mitigation often struggle to keep pace with the evolving nature of these attacks. Machine learning, with its ability to analyze vast amounts of data and recognize patterns, offers a robust solution to this challenge. The aim of the paper is to demonstrate the application of ensemble ML algorithms, namely K-Means and KNN, in a dual clustering mechanism with PySpark to achieve 99% detection accuracy. Used together, the algorithms identify distinctive features of DDoS attacks that reflect reality very accurately, making them a good combination for this aim. After the data were preprocessed, both algorithms on the PySpark foundation achieved 99% accuracy when tuned on the features of a large DDoS dataset. The semi-supervised dataset tabulates traffic anomalies in terms of packet size distribution in correlation with flow duration. By training K-Means clustering and then applying KNN to the dataset, the algorithms learn to evaluate the character of activity in greater depth by displaying density with ease. The study evaluates the effectiveness of K-Means clustering with KNN as ensemble algorithms that adapt very well to detecting complex patterns. Ultimately, the results indicate that ML-based approaches significantly improve detection rates compared to traditional methods. Furthermore, ensemble learning methods, which combine multiple models to improve prediction accuracy, excel at handling the complexity and variability of big datasets, especially when implemented with PySpark. The findings suggest that the gains in accuracy derive from newer software designed to reflect reality. However, challenges remain in the deployment of these systems, including the need for large, high-quality datasets and the potential for adversarial attacks that attempt to deceive the ML models. Future research should continue to improve the robustness and efficiency of combined algorithms, and integrate them with existing security frameworks to provide comprehensive protection against DDoS and other attacks. The dataset was originally created by the University of New Brunswick to analyze DDoS data. It is based on logs of the university's servers, which recorded various DoS attacks throughout the publicly available period, yielding 80 attributes and a size of 6.40 GB. In this dataset, the binary label column is a crucial part of the final classification: in this last column, normal traffic is differentiated from attack traffic. Further analysis is then ripe for investigation. Finally, malicious-traffic alert software, for example, should be trained on the dependence of packet influx on flow duration, which creates a mathematical scope for averages to act on. In achieving such high accuracy, the project illustrates (with excerpts from my Google Colab account) many tuning attempts. Cybersecurity calls for more work on the character of brute-force attack traffic and normal traffic features overall, since most of our activity as humans is now digitally based in work, recreational, and social environments.
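The K-Means half of the dual mechanism can be sketched in plain Python (the paper uses PySpark; the flow records below, pairing packet size with flow duration, are invented for illustration):

```python
import random

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(cluster):
    n = len(cluster)
    return tuple(sum(p[d] for p in cluster) / n for d in range(len(cluster[0])))

def kmeans(points, k, iters=50, seed=0):
    # Lloyd's algorithm: alternate nearest-center assignment and mean update.
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda c: dist2(p, centers[c]))].append(p)
        # keep the old center if a cluster empties out
        centers = [mean(cl) if cl else centers[i] for i, cl in enumerate(clusters)]
    return centers, clusters

# hypothetical flows: (mean packet size in bytes, flow duration in seconds);
# the burst of tiny, short flows is the DDoS-like group
flows = [(1400, 9.5), (1350, 8.7), (1500, 10.2), (60, 0.4), (64, 0.5), (70, 0.3)]
centers, clusters = kmeans(flows, 2)
print(sorted(len(c) for c in clusters))   # → [3, 3]
```

In the paper's pipeline a KNN classifier would then be fitted on top of these cluster structures; here only the unsupervised step is shown.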
Funding: “Peso-Dollar Exchange Rate Prediction: Fundamentals vs. Internet Search Indicators”, sponsored by the Universidad Autonoma de Nuevo Leon, Mexico (PAICYT Project #375-CSA-2022).
Abstract: Recently, internet users have significantly increased their use of search engines, and market investors are no exception. As a result, predictive models that incorporate scattered web-based information are developing as an area of forecasting. The objective of this research is to compare the predictive accuracy of fundamental macroeconomic variables, online attention series measured by the Google Trends search volume index, and a combination of both data types for the Mexican, Brazilian, Chilean, and Colombian currencies paired with the USD. The exchange rate series used in this study are sourced from a real-time platform. Four indicators capturing the fundamental macroeconomic differences between these emerging economies and the U.S. from January 2004 to March 2021 (monthly) were analyzed. To assess predictive performance, the KNN algorithm was compared with OLS regression and the random walk with drift model. For in-sample predictions, the results generally exhibit lower estimation errors for the random walk with drift model, although on the joint fundamental–online attention data the KNN and OLS predictions are more accurate than those of the random walk with drift. The KNN predictions based on out-of-sample fit, however, generate the lowest estimation errors and the most accurate predictions for the joint fundamental–online attention data. Additionally, performance testing indicates that the extended KNN model outperforms the out-of-sample forecasts of the OLS regression and the random walk with drift model.
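The random walk with drift benchmark used above is simple enough to state in a few lines; this sketch uses an invented monthly series, not the study's actual exchange-rate data:

```python
# Random walk with drift: the forecast is the last observation plus the
# average historical one-step change (the drift).
def rw_drift_forecast(history):
    steps = [b - a for a, b in zip(history[:-1], history[1:])]
    drift = sum(steps) / len(steps)
    return history[-1] + drift

def mae(actual, predicted):
    # mean absolute error, one of the estimation-error measures compared
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

rates = [19.1, 19.3, 19.2, 19.6, 19.8, 19.7, 20.0]    # hypothetical MXN/USD
preds = [rw_drift_forecast(rates[:t]) for t in range(3, len(rates))]
print(mae(rates[3:], preds))
```

Any competing model (KNN, OLS) earns its keep only if its out-of-sample error beats this baseline, which is the comparison the abstract describes.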
Abstract: In the realm of contemporary artificial intelligence, machine learning enables automation, allowing systems to naturally acquire and enhance their capabilities through learning. Here, video recommendation is performed using machine learning techniques. A recommender system is an information filtering system used to predict the "rating" or "preference" a given user would assign. The prediction depends on past ratings, history, interests, IMDb rating, and so on. This can be implemented using collaborative and content-based filtering approaches, which take the data provided by different users, analyze them, and then recommend the video that suits the user at that particular time. The required video datasets are taken from GroupLens. The recommender system is implemented in the Python programming language and built from two algorithms: K-means clustering and KNN classification. K-means is an unsupervised machine learning algorithm whose main goal is to group similar data points together and discover patterns; to do so, K-means looks for a fixed number 'k' of clusters in a dataset, where a cluster is a collection of data points aggregated because of certain similarities. K-Nearest Neighbors is a supervised learning algorithm used for classification: given the data, KNN classifies a new data point by examining its 'k' nearest data points. The final quality is assessed through clustering quality and root mean squared error; using these algorithms, we can recommend videos more appropriately based on users' previous records and ratings.
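The KNN step of such a recommender can be sketched as follows; the tiny ratings matrix is invented, standing in for the GroupLens data:

```python
# Predict a user's rating for a video as the mean rating given by the
# k users whose rating vectors (over other shared videos) are nearest.
def euclid(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def predict_rating(target, neighbors, k=2):
    # neighbors: (rating vector over shared videos, rating for target video)
    ranked = sorted(neighbors, key=lambda n: euclid(target, n[0]))
    return sum(r for _, r in ranked[:k]) / k

users = [([5, 4, 1], 5),    # similar taste, rated the target video 5
         ([4, 5, 2], 4),    # similar taste, rated it 4
         ([1, 2, 5], 1)]    # opposite taste, rated it 1
print(predict_rating([5, 5, 1], users))   # → 4.5
```

In the full system, K-means would first group users into clusters so that the KNN search runs only within the querying user's cluster.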
Funding: the National Natural Science Foundation of China (Nos. 61073117 and 61175046), the Provincial Natural Science Research Program of Higher Education Institutions of Anhui Province (No. KJ2013A016), the Academic Innovative Research Projects of Anhui University Graduate Students (No. 10117700183), and the 211 Project of Anhui University.
Abstract: Mining from ambiguous data is very important in data mining. This paper discusses one of the tasks of mining from ambiguous data, known as the multi-instance problem. In the multi-instance problem, each pattern is a labeled bag that consists of a number of unlabeled instances. A bag is negative if all instances in it are negative; a bag is positive if it has at least one positive instance. Because the instances in a positive bag are not individually labeled, each positive bag is ambiguous. The mining aim is to classify unseen bags. The main idea of existing multi-instance algorithms is to find true positive instances in positive bags, convert the multi-instance problem into a supervised problem, and obtain the labels of test bags by predicting the labels of their unknown instances. In this paper, we mine multi-instance data from another point of view, i.e., excluding the false positive instances in positive bags and predicting the label of an entire unknown bag. We propose an algorithm called Multi-Instance Covering kNN (MICkNN) for mining from multi-instance data. Briefly, a constructive covering algorithm is first utilized to restructure the original multi-instance data. Then, the kNN algorithm is applied to discriminate the false positive instances. In the test stage, we label a test bag directly according to the similarity between the unseen bag and the sphere neighbors obtained from the last two steps. Experimental results demonstrate that the proposed algorithm is competitive with most state-of-the-art multi-instance methods in both classification accuracy and running time.
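A much-simplified stand-in for the bag-level idea (not MICkNN's actual covering construction, which is omitted here): measure bag-to-bag distance by the closest instance pair, then label an unseen bag by its nearest labeled bag.

```python
# Minimal bag-level nearest-neighbor sketch for the multi-instance setting.
def instance_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def bag_dist(bag_a, bag_b):
    # minimal-Hausdorff-style distance: closest pair of instances
    return min(instance_dist(a, b) for a in bag_a for b in bag_b)

def classify_bag(bag, labeled_bags):
    # labeled_bags: list of (bag, label); predict the nearest bag's label
    return min(labeled_bags, key=lambda lb: bag_dist(bag, lb[0]))[1]

train = [([(0.1, 0.2), (0.3, 0.1)], 0),   # negative: all instances negative
         ([(0.2, 0.1), (5.0, 5.1)], 1)]   # positive: one positive instance
print(classify_bag([(4.9, 5.0)], train))  # → 1
```

This illustrates why false positive instances matter: the negative-looking instance (0.2, 0.1) inside the positive bag can drag unrelated bags toward label 1, which is exactly what MICkNN's covering-plus-kNN filtering is designed to suppress.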
Funding: Supported by the Nanjing University of Aeronautics and Astronautics (KFB2305601).
Abstract: The complexity and unpredictability of clear air turbulence (CAT) pose significant challenges to aviation safety. Accurate prediction of turbulence events is crucial for reducing flight accidents and economic losses. However, traditional turbulence prediction methods, such as ensemble forecasting techniques, have certain limitations: they only consider turbulence data from the most recent period, making it difficult to capture the nonlinear relationships present in turbulence. This study proposes a turbulence forecasting model based on the K-nearest neighbor (KNN) algorithm, which uses a combination of eight CAT diagnostic features as the feature vector and introduces CAT diagnostic feature weights to improve prediction accuracy. The model takes seven years of CAT diagnostics from 125 to 500 hPa, computed from the ECMWF fifth-generation reanalysis dataset (ERA5), as feature vector inputs and combines them with labels from Pilot Report (PIREP) annotated data, where each sample contributes to the prediction result. By measuring the distance between the current CAT diagnostic variable and other variables, the model determines the climatically most similar neighbors and identifies the turbulence intensity category caused by the current variable. To evaluate the model's performance in diagnosing high-altitude turbulence over Colorado, PIREP cases were randomly selected for analysis. The results show that the weighted KNN (W-KNN) model exhibits higher skill in turbulence prediction and outperforms traditional prediction methods and other machine learning models (e.g., random forest) in capturing moderate-or-greater (MOG) level turbulence. The performance of the model was confirmed by evaluating the receiver operating characteristic (ROC) curve, maximum True Skill Statistic (maxTSS = 0.552), and reliability plot. A robust score (area under the curve: AUC = 0.86) was obtained, and the model demonstrated sensitivity to seasonal and annual climate fluctuations.
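Feature-weighted KNN, the idea behind the W-KNN model, can be sketched as below. The two features and the weights here are hypothetical stand-ins for the eight CAT diagnostics and their learned weights; only the mechanism is the point.

```python
# KNN with per-feature weights: each diagnostic dimension contributes to the
# distance in proportion to its weight, then the k nearest labels vote.
def wdist(a, b, w):
    return sum(wi * (x - y) ** 2 for wi, x, y in zip(w, a, b)) ** 0.5

def weighted_knn(query, samples, weights, k=3):
    ranked = sorted((wdist(query, x, weights), label) for x, label in samples)
    votes = {}
    for _, label in ranked[:k]:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)

# two stand-in diagnostics; the second is treated as twice as informative
samples = [((0.1, 0.2), "null"), ((0.2, 0.1), "null"),
           ((0.9, 1.0), "MOG"), ((1.0, 0.9), "MOG")]
print(weighted_knn((0.9, 0.95), samples, weights=(1.0, 2.0)))   # → MOG
```

Upweighting the more skillful diagnostics pulls the "climatically most similar neighbors" toward the dimensions that actually discriminate turbulence intensity.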
Funding: Supported by the National Natural Science Foundation of China (62162040), the Gansu Provincial Science and Technology Plan Funding Key Project of Natural Science Foundation of China (22JR5RA226), the Gansu Province Higher Education Innovation Fund-Funded Project (2021A-028), and the Gansu Provincial Science and Technology Program Funding Project (21ZD4GA028).
Abstract: In the process of obtaining information from an actual traffic network, incomplete data sets caused by missing data reduce the validity of the data and the performance of data-driven models. A traffic flow repair model based on a k-nearest neighbor (KNN) spatio-temporal attention (STA) graph convolutional network (KAGCN) is proposed in this paper. Firstly, the missing data are initially interpolated by the KNN algorithm, and the complete index set (CIS) is constructed by combining the adjacency matrix of the network structure. Secondly, an STA mechanism is added to the CIS to capture the spatio-temporal correlation between the data. Then, a graph neural network (GNN) is used to reconstruct the data via spatio-temporal correlation, and the reconstructed data set is used to correct and optimize the initial interpolation, yielding the final repair result. Finally, the PEMSD4 data set is used to simulate missing data in an actual road network, and experiments are carried out at missing rates of 30%, 50%, and 70%. The results show that the mean absolute error (MAE), root mean square error (RMSE), and mean absolute percentage error (MAPE) of the KAGCN model improved by at least 3.83%, 2.80%, and 5.33%, respectively, compared with the other baseline models at the different missing rates, proving that the KAGCN model is effective in repairing missing traffic flow data.
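The initial KNN interpolation step can be sketched in a few lines of plain Python; the detector records below are hypothetical stand-ins for PEMSD4 rows.

```python
# Fill one detector's missing reading with the mean of that field over the
# k complete records that are nearest in the observed columns.
def knn_impute(row, complete_rows, missing_idx, k=2):
    obs = [i for i in range(len(row)) if i != missing_idx]
    def d(other):
        # distance computed only over the columns that are observed
        return sum((row[i] - other[i]) ** 2 for i in obs) ** 0.5
    nearest = sorted(complete_rows, key=d)[:k]
    return sum(r[missing_idx] for r in nearest) / k

# records: (flow, speed, occupancy); the speed reading is missing below
records = [(300, 62.0, 0.12), (310, 60.5, 0.13), (80, 30.0, 0.45)]
print(knn_impute((305, None, 0.12), records, missing_idx=1))   # → 61.25
```

In KAGCN this rough fill is only a starting point: the STA mechanism and GNN then refine it using the road network's spatio-temporal structure.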
Abstract: Particle size distribution is extremely important in the coal preparation industry. It is traditionally analysed by a manual screening method, which is relatively time-consuming and cannot immediately guide production. In this paper, an image segmentation method for images of coal particles is proposed. It employs the watershed algorithm, k-nearest neighbour algorithm, and convex hull method to achieve preliminary segmentation, merge small pieces with large pieces, and split adhered particles, respectively. Comparing the automated segmentation using this method with manual segmentation, the results are found to be comparable. The size distributions obtained by the automated and manual segmentation methods are nearly identical, and the standard deviation is less than 3%, indicating good reliability. This automated image segmentation method provides a new approach for rapidly analysing the size distribution of coal particles, with size fractions defined according to consumer requirements.
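Once segmentation yields per-particle pixel areas, the end product — a size distribution over consumer-defined fractions — reduces to simple arithmetic. A sketch with invented areas and a hypothetical pixel scale:

```python
import math

# Convert each segmented region's pixel area to an equivalent circular
# diameter, then tally the particles into consumer-defined size fractions.
def equivalent_diameter(area_px, mm_per_px=0.5):   # hypothetical pixel scale
    return 2.0 * math.sqrt(area_px / math.pi) * mm_per_px

def size_fractions(areas, bin_edges):
    counts = [0] * (len(bin_edges) + 1)
    for a in areas:
        d = equivalent_diameter(a)
        counts[sum(d >= edge for edge in bin_edges)] += 1   # pick the bin
    return [c / len(areas) for c in counts]

areas = [300, 1200, 5000, 90, 2500]               # pixel areas of 5 particles
print(size_fractions(areas, bin_edges=[5, 15]))   # <5, 5-15, >=15 mm → [0.0, 0.4, 0.6]
```

A weight-based distribution, as screening reports usually use, would sum areas (or estimated masses) per bin instead of counting particles.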
Abstract: This paper aims to provide an efficient and straightforward structural form-finding method for designers to extrapolate component forms during the conceptual stage. The core idea is to optimize the classical method of structural form-finding based on principal stress lines by using parametric tools. The traditional operating process of this method relies excessively on the designer's engineering experience and lacks precision. Meanwhile, current optimization work on this method is overly complicated for architects, and limitations exist in component type and final result. Therefore, to facilitate an architect's conceptual work, the optimization metrics of the method in this paper are set as simplicity, practicality, freedom, and rapid feedback. To that end, this paper optimizes the method in three aspects: a modeling strategy for continuum structures, classification processing of data using the k-nearest neighbor algorithm, and a topological form-finding process based on stress lines. Eventually, it allows architects to create structural texture with formal aesthetics and modify it in real time on the basis of structural analysis results. This paper also explores a comprehensive application strategy from internal force analysis diagramming to form-finding. The finite element analysis tool Karamba3D verifies the structural performance of the form-finding method. The performance is compared with that of the conventional form, and the comparison results show the practicality and potential of the strategy in this paper.