Journal Articles
A total of 381,549 articles found
1. Research on the Influence of Financial Status of Benxi Steel Sheet Material on Stock Price Under the Perspective of Big Data
Authors: Rui Gao, Wenli Bao, Fei Xu, Junchi Liu, Meihang Li. Proceedings of Business and Economic Studies, 2025, No. 6, pp. 68-73.
Based on the financial data and stock price information of Bengang Steel Plates Co., Ltd. from 2004 to 2023, this paper uses SPSS 26 software, combined with DuPont Analysis and the Wall Score Method, to explore the correlation between stock price and nine key financial indicators selected from three dimensions (profitability, development capability, and operating capability), including fixed asset growth rate, price-to-book ratio (P/B ratio), and gross profit margin. Through correlation analysis, multiple regression analysis, and curve fitting, the study finds that fixed asset growth rate, P/B ratio, and gross profit margin show a significant positive correlation with stock price; return on equity (ROE), operating income, and accounts receivable turnover days show a significant negative correlation with stock price; and earnings per share (EPS) and net profit growth rate do not show a significant correlation with stock price. The results indicate that the stock price of Bengang Steel Plates Co., Ltd. is strongly affected by its asset scale and market valuation, while some profitability indicators have not been effectively transmitted to the stock price. Finally, countermeasures and suggestions are put forward regarding cost control, technological innovation, market expansion, and financial structure optimization, to provide references for corporate operation and investment decisions.
Keywords: Bengang Steel Plates Co., Ltd.; financial indicators; stock price impact
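The correlation and multiple regression workflow described in the abstract above can be sketched in Python with pandas and statsmodels. The file name, column names, and indicator list below are illustrative assumptions, not the paper's data or SPSS configuration:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical annual/quarterly table: one row per reporting period (column names are placeholders).
df = pd.read_csv("bengang_financials.csv")  # assumed file
indicators = ["fixed_asset_growth", "pb_ratio", "gross_margin",
              "roe", "operating_income", "ar_turnover_days",
              "eps", "net_profit_growth"]

# Pearson correlation of each indicator with the stock price.
correlations = df[indicators + ["stock_price"]].corr()["stock_price"].drop("stock_price")
print(correlations.sort_values(ascending=False))

# Multiple linear regression: stock price on all indicators jointly.
X = sm.add_constant(df[indicators])
model = sm.OLS(df["stock_price"], X).fit()
print(model.summary())  # coefficients, t-statistics, R-squared
```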
2. Short-Term Spillover Effects in High-order Moments of Stocks, Foreign Currency Exchange and Bitcoin with Intraday Data
Author: Xinying He. Proceedings of Business and Economic Studies, 2025, No. 3, pp. 172-181.
This paper employs Granger causality analysis and the generalized impulse response function (GIRF) to study the higher-order moment spillover effects among Bitcoin, stock markets, and foreign exchange markets in the U.S. Using intraday high-frequency data, the research focuses on the interactions across higher-order moments, including volatility, jumps, skewness, and kurtosis. The results reveal significant bidirectional spillover effects between Bitcoin and traditional financial assets, particularly in terms of volatility and jump behavior, indicating that the cryptocurrency market has become a crucial component of global financial risk transmission. This study provides new theoretical perspectives and policy recommendations for global asset allocation, market regulation, and risk management, underscoring the importance of proactive management measures in addressing the complex risk interactions between cryptocurrencies and traditional financial markets.
Keywords: higher-order moments; intraday data; spillover effects; Bitcoin; risk management
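A hedged sketch of a pairwise Granger-causality test on realized higher-order moment series, using statsmodels. The series names and random placeholder data are assumptions for illustration; the GIRF step of the paper is not reproduced here:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# Placeholder intraday realized-moment series (replace with real realized volatility/skewness data).
rng = np.random.default_rng(0)
data = pd.DataFrame({
    "btc_realized_vol": rng.normal(size=500),
    "spx_realized_vol": rng.normal(size=500),
})

# Does BTC realized volatility Granger-cause stock-index realized volatility?
# Column order is [effect, cause]; lags 1..5 are tested.
results = grangercausalitytests(data[["spx_realized_vol", "btc_realized_vol"]], maxlag=5)
```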
3. A Composite Loss-Based Autoencoder for Accurate and Scalable Missing Data Imputation
Authors: Thierry Mugenzi, Cahit Perkgoz. Computers, Materials & Continua, 2026, No. 1, pp. 1985-2005.
Missing data presents a crucial challenge in data analysis, especially in high-dimensional datasets, where missing data often leads to biased conclusions and degraded model performance. In this study, we present a novel autoencoder-based imputation framework that integrates a composite loss function to enhance robustness and precision. The proposed loss combines (i) a guided, masked mean squared error focusing on missing entries; (ii) a noise-aware regularization term to improve resilience against data corruption; and (iii) a variance penalty to encourage expressive yet stable reconstructions. We evaluate the proposed model across four missingness mechanisms, namely Missing Completely at Random, Missing at Random, Missing Not at Random, and Missing Not at Random with quantile censorship, under systematically varied feature counts, sample sizes, and missingness ratios ranging from 5% to 60%. Four publicly available real-world datasets (Stroke Prediction, Pima Indians Diabetes, Cardiovascular Disease, and Framingham Heart Study) were used, and the results show that our proposed model consistently outperforms baseline methods, including traditional and deep learning-based techniques. An ablation study reveals the additive value of each component in the loss function. Additionally, we assessed the downstream utility of imputed data through classification tasks, where datasets imputed by the proposed method yielded the highest receiver operating characteristic area under the curve scores across all scenarios. The model demonstrates strong scalability and robustness, improving performance with larger datasets and higher feature counts. These results underscore the capacity of the proposed method to produce not only numerically accurate but also semantically useful imputations, making it a promising solution for robust data recovery in clinical applications.
Keywords: missing data imputation; autoencoder; deep learning; missing mechanisms
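The composite loss (masked MSE on missing entries, a noise-aware term, and a variance penalty) can be sketched in PyTorch roughly as below. The network size, weighting factors, and the exact form of each term are assumptions, not the authors' published implementation:

```python
import torch
import torch.nn as nn

class ImputationAE(nn.Module):
    """Small fully connected autoencoder for tabular imputation."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden // 2), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(hidden // 2, hidden), nn.ReLU(),
                                     nn.Linear(hidden, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def composite_loss(x_hat, x_true, miss_mask, x_noisy_hat, alpha=1.0, beta=0.1, gamma=0.01):
    # (i) masked MSE: reconstruction error measured only on the (simulated) missing cells
    masked_mse = ((x_hat - x_true) ** 2 * miss_mask).sum() / miss_mask.sum().clamp(min=1)
    # (ii) noise-aware term: reconstruction of a corrupted copy should still match the clean target
    noise_term = ((x_noisy_hat - x_true) ** 2).mean()
    # (iii) variance penalty: discourage collapsed, near-constant reconstructions
    var_penalty = (x_true.var(dim=0) - x_hat.var(dim=0)).abs().mean()
    return alpha * masked_mse + beta * noise_term + gamma * var_penalty
```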
4. Advances in Machine Learning for Explainable Intrusion Detection Using Imbalance Datasets in Cybersecurity with Harris Hawks Optimization
Authors: Amjad Rehman, Tanzila Saba, Mona M. Jamjoom, Shaha Al-Otaibi, Muhammad I. Khan. Computers, Materials & Continua, 2026, No. 1, pp. 1804-1818.
Modern intrusion detection systems (MIDS) face persistent challenges in coping with the rapid evolution of cyber threats, high-volume network traffic, and imbalanced datasets. Traditional models often lack the robustness and explainability required to detect novel and sophisticated attacks effectively. This study introduces an advanced, explainable machine learning framework for multi-class IDS using the KDD99 and IDS datasets, which reflect real-world network behavior through a blend of normal and diverse attack classes. The methodology begins with sophisticated data preprocessing, incorporating both RobustScaler and QuantileTransformer to address outliers and skewed feature distributions, ensuring standardized and model-ready inputs. Critical dimensionality reduction is achieved via the Harris Hawks Optimization (HHO) algorithm, a nature-inspired metaheuristic modeled on hawks' hunting strategies. HHO efficiently identifies the most informative features by optimizing a fitness function based on classification performance. Following feature selection, SMOTE is applied to the training data to resolve class imbalance by synthetically augmenting underrepresented attack types. A stacked architecture is then employed, combining the strengths of XGBoost, SVM, and RF as base learners. This layered approach improves prediction robustness and generalization by balancing bias and variance across diverse classifiers. The model was evaluated using standard classification metrics: precision, recall, F1-score, and overall accuracy. The best overall performance was recorded with an accuracy of 99.44% for UNSW-NB15, demonstrating the model's effectiveness. After balancing, the model demonstrated a clear improvement in detecting attacks. We tested the model on four datasets to show the effectiveness of the proposed approach and performed an ablation study to check the effect of each parameter. The proposed model is also computationally efficient. To support transparency and trust in decision-making, explainable AI (XAI) techniques are incorporated that provide both global and local insight into feature contributions and offer intuitive visualizations for individual predictions. This makes the framework suitable for practical deployment in cybersecurity environments that demand both precision and accountability.
Keywords: intrusion detection; XAI; machine learning; ensemble method; cybersecurity; imbalanced data
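A minimal sketch of the scaling, oversampling, and stacked-ensemble stages with scikit-learn, imbalanced-learn, and XGBoost. The HHO feature-selection step is abstracted away (any selected feature subset can be substituted), and all parameters are illustrative rather than the values tuned in the paper:

```python
from sklearn.preprocessing import RobustScaler, QuantileTransformer
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from imblearn.over_sampling import SMOTE
from xgboost import XGBClassifier

def build_and_fit(X_train, y_train):
    """X_train/y_train: features (after an assumed HHO selection) and multi-class labels."""
    # Robust scaling + quantile transform to tame outliers and skewed feature distributions.
    preprocess = make_pipeline(RobustScaler(), QuantileTransformer(output_distribution="normal"))
    X_scaled = preprocess.fit_transform(X_train)

    # SMOTE on the training split only, to synthesize minority attack samples.
    X_bal, y_bal = SMOTE(random_state=42).fit_resample(X_scaled, y_train)

    # Stacked ensemble: XGBoost, SVM, and RF as base learners; logistic regression as meta-learner.
    stack = StackingClassifier(
        estimators=[("xgb", XGBClassifier(n_estimators=200, eval_metric="mlogloss")),
                    ("svm", SVC(probability=True)),
                    ("rf", RandomForestClassifier(n_estimators=200))],
        final_estimator=LogisticRegression(max_iter=1000))
    return preprocess, stack.fit(X_bal, y_bal)
```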
5. Estimation of Soil Organic Carbon Stocks Utilizing Machine Learning Algorithms and Multi-source Geospatial Data in Coastal Wetlands of Tianjin and Hebei, China
Authors: YANG Rui, LIU Mingyue, ZHANG Yongbin, MAN Weidong, TONG Jingfen, LIU Dong, ZHANG Qingwen, KOU Caiyao, LI Xiang, LIU Yahui, TIAN Di, YIN Xuan, HE Jiannan. Chinese Geographical Science, 2025, No. 4, pp. 707-721, I0003.
Coastal wetlands are crucial for the 'blue carbon sink', significantly contributing to regulating climate change. This study utilized 160 soil samples, 35 remote sensing features, and 5 geo-climatic data to accurately estimate the soil organic carbon stocks (SOCS) in the coastal wetlands of Tianjin and Hebei, China. To reduce data redundancy, simplify model complexity, and improve model interpretability, Pearson correlation analysis (PsCA), Boruta, and recursive feature elimination (RFE) were employed to optimize features. Combined with the optimized features, the soil organic carbon density (SOCD) prediction model was constructed by using multivariate adaptive regression splines (MARS), extreme gradient boosting (XGBoost), and random forest (RF) algorithms and applied to predict the spatial distribution of SOCD and estimate the SOCS of different wetland types in 2020. The results show that: 1) different feature combinations have a significant influence on model performance; better prediction performance was attained by building a model using RFE-based feature combinations, and RF has the best prediction accuracy (R² = 0.587, RMSE = 0.798 kg/m², MAE = 0.660 kg/m²); 2) optical features are more important than radar and geo-climatic features in the MARS, XGBoost, and RF algorithms; 3) the size of SOCS is related to SOCD and the area of each wetland type; aquaculture pond has the highest SOCS, followed by marsh, salt pan, mudflat, and sand shore.
Keywords: soil organic carbon stocks (SOCS); soil organic carbon density (SOCD); multivariate adaptive regression splines (MARS); extreme gradient boosting (XGBoost); random forest (RF); residual kriging (RK); feature optimization; coastal wetlands; Tianjin and Hebei, China
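A brief sketch of RFE-based feature optimization feeding a random forest SOCD regression, in scikit-learn. The number of retained features, cross-validation scheme, and hyperparameters are placeholders rather than the study's tuned values:

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score

def rfe_rf_socd(X, y, n_keep=15):
    """X: samples x (remote sensing + geo-climatic) features; y: measured SOCD (kg/m^2)."""
    rf = RandomForestRegressor(n_estimators=500, random_state=0)
    selector = RFE(estimator=rf, n_features_to_select=n_keep, step=1).fit(X, y)
    X_opt = selector.transform(X)

    # Cross-validated accuracy of the RF model on the optimized feature set.
    r2 = cross_val_score(rf, X_opt, y, cv=10, scoring="r2").mean()
    rmse = -cross_val_score(rf, X_opt, y, cv=10,
                            scoring="neg_root_mean_squared_error").mean()
    return selector.support_, r2, rmse  # boolean mask of kept features plus accuracy metrics
```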
6. Enhanced Capacity Reversible Data Hiding Based on Pixel Value Ordering in Triple Stego Images
Authors: Kim Sao Nguyen, Ngoc Dung Bui. Computers, Materials & Continua, 2026, No. 1, pp. 1571-1586.
Reversible data hiding (RDH) enables secret data embedding while preserving complete cover image recovery, making it crucial for applications requiring image integrity. The pixel value ordering (PVO) technique used in multi-stego images provides good image quality but often results in low embedding capability. To address these challenges, this paper proposes a high-capacity RDH scheme based on PVO that generates three stego images from a single cover image. The cover image is partitioned into non-overlapping blocks with pixels sorted in ascending order. Four secret bits are embedded into each block's maximum pixel value, while three additional bits are embedded into the second-largest value when the pixel difference exceeds a predefined threshold. A similar embedding strategy is also applied to the minimum side of the block, including the second-smallest pixel value. This design enables each block to embed up to 14 bits of secret data. Experimental results demonstrate that the proposed method achieves significantly higher embedding capacity and improved visual quality compared to existing triple-stego RDH approaches, advancing the field of reversible steganography.
Keywords: reversible data hiding (RDH); pixel value ordering (PVO); RDH based on three stego images
7. Impact of Data Processing Techniques on AI Models for Attack-Based Imbalanced and Encrypted Traffic within IoT Environments
Authors: Yeasul Kim, Chaeeun Won, Hwankuk Kim. Computers, Materials & Continua, 2026, No. 1, pp. 247-274.
With the increasing emphasis on personal information protection, encryption through security protocols has emerged as a critical requirement in data transmission and reception processes. Nevertheless, IoT ecosystems comprise heterogeneous networks where outdated systems coexist with the latest devices, spanning a range from non-encrypted to fully encrypted devices. Given the limited visibility into payloads in this context, this study investigates AI-based attack detection methods that leverage encrypted traffic metadata, eliminating the need for decryption and minimizing system performance degradation, especially in light of these heterogeneous devices. Using the UNSW-NB15 and CICIoT-2023 datasets, encrypted and unencrypted traffic were categorized according to security protocol, and AI-based intrusion detection experiments were conducted for each traffic type based on metadata. To mitigate the problem of class imbalance, eight different data sampling techniques were applied. The effectiveness of these sampling techniques was then comparatively analyzed using two ensemble models and three deep learning (DL) models from various perspectives. The experimental results confirmed that metadata-based attack detection is feasible using only encrypted traffic. In the UNSW-NB15 dataset, the F1-score of encrypted traffic was approximately 0.98, which is 4.3% higher than that of unencrypted traffic (approximately 0.94). In addition, analysis of the encrypted traffic in the CICIoT-2023 dataset using the same method showed a significantly lower F1-score of roughly 0.43, indicating that the quality of the dataset and the preprocessing approach have a substantial impact on detection performance. Furthermore, when data sampling techniques were applied to encrypted traffic, the recall in the UNSW-NB15 (encrypted) dataset improved by up to 23.0%, and in the CICIoT-2023 (encrypted) dataset by 20.26%, showing a similar level of improvement. Notably, in CICIoT-2023, the F1-score and Receiver Operating Characteristic-Area Under the Curve (ROC-AUC) increased by 59.0% and 55.94%, respectively. These results suggest that data sampling can have a positive effect even in encrypted environments. However, the extent of the improvement may vary depending on data quality, model architecture, and sampling strategy.
Keywords: encrypted traffic; attack detection; data sampling technique; AI-based detection; IoT environment
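The comparison of resampling strategies on metadata features can be sketched with imbalanced-learn as below. The sampler list, classifier, and metrics are illustrative stand-ins for the eight techniques and five models evaluated in the paper:

```python
from imblearn.over_sampling import SMOTE, ADASYN, RandomOverSampler
from imblearn.under_sampling import RandomUnderSampler
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score, roc_auc_score

SAMPLERS = {"smote": SMOTE(), "adasyn": ADASYN(),
            "ros": RandomOverSampler(), "rus": RandomUnderSampler()}

def compare_samplers(X_train, y_train, X_test, y_test):
    """Fit the same classifier after each resampling technique and report F1 / ROC-AUC."""
    scores = {}
    for name, sampler in SAMPLERS.items():
        X_bal, y_bal = sampler.fit_resample(X_train, y_train)
        clf = RandomForestClassifier(n_estimators=200).fit(X_bal, y_bal)
        scores[name] = {
            "f1": f1_score(y_test, clf.predict(X_test)),
            "roc_auc": roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]),
        }
    return scores
```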
8. Efficient Arabic Essay Scoring with Hybrid Models: Feature Selection, Data Optimization, and Performance Trade-Offs
Authors: Mohamed Ezz, Meshrif Alruily, Ayman Mohamed Mostafa, Alaa S. Alaerjan, Bader Aldughayfiq, Hisham Allahem, Abdulaziz Shehab. Computers, Materials & Continua, 2026, No. 1, pp. 2274-2301.
Automated essay scoring (AES) systems have gained significant importance in educational settings, offering a scalable, efficient, and objective method for evaluating student essays. However, developing AES systems for Arabic poses distinct challenges due to the language's complex morphology, diglossia, and the scarcity of annotated datasets. This paper presents a hybrid approach to Arabic AES by combining text-based, vector-based, and embedding-based similarity measures to improve essay scoring accuracy while minimizing the training data required. Using a large Arabic essay dataset categorized into thematic groups, the study conducted four experiments to evaluate the impact of feature selection, data size, and model performance. Experiment 1 established a baseline using a non-machine-learning approach, selecting top-N correlated features to predict essay scores. The subsequent experiments employed 5-fold cross-validation. Experiment 2 showed that combining embedding-based, text-based, and vector-based features in a Random Forest (RF) model achieved an R² of 88.92% and an accuracy of 83.3% within a 0.5-point tolerance. Experiment 3 further refined the feature selection process, demonstrating that 19 correlated features yielded optimal results, improving R² to 88.95%. In Experiment 4, an optimal data efficiency training approach was introduced, where training data portions increased from 5% to 50%. The study found that using just 10% of the data achieved near-peak performance, with an R² of 85.49%, emphasizing an effective trade-off between performance and computational costs. These findings highlight the potential of the hybrid approach for developing scalable Arabic AES systems, especially in low-resource environments, addressing linguistic challenges while ensuring efficient data usage.
Keywords: automated essay scoring; text-based features; vector-based features; embedding-based features; feature selection; optimal data efficiency
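A compact sketch of the hybrid idea: derive text-based (TF-IDF cosine) and embedding-based similarity features between a student essay and a reference answer, then regress scores with a random forest. The embedding model name and the two-feature set are assumptions for illustration, not the authors' feature inventory:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.ensemble import RandomForestRegressor
from sentence_transformers import SentenceTransformer  # assumed embedding backend

embedder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")  # placeholder model

def similarity_features(essays, reference):
    """One row per essay: [TF-IDF cosine, embedding cosine] against the reference answer."""
    tfidf = TfidfVectorizer().fit(essays + [reference])
    tfidf_sim = cosine_similarity(tfidf.transform(essays), tfidf.transform([reference])).ravel()
    emb_sim = cosine_similarity(embedder.encode(essays), embedder.encode([reference])).ravel()
    return np.column_stack([tfidf_sim, emb_sim])

# scores: human-assigned marks; a RandomForestRegressor learns the mapping from similarities.
# X = similarity_features(train_essays, reference_answer)
# model = RandomForestRegressor(n_estimators=300).fit(X, scores)
```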
9. Individual Software Expertise Formalization and Assessment from Project Management Tool Databases
Authors: Traian-Radu Plosca, Alexandru-Mihai Pescaru, Bianca-Valeria Rus, Daniel-Ioan Curiac. Computers, Materials & Continua, 2026, No. 1, pp. 389-411.
Objective expertise evaluation of individuals, as a prerequisite stage for team formation, has been a long-term desideratum in large software development companies. With the rapid advancements in machine learning methods, based on reliable existing data stored in project management tools' datasets, automating this evaluation process becomes a natural step forward. In this context, our approach focuses on quantifying software developer expertise by using metadata from the task-tracking systems. For this, we mathematically formalize two categories of expertise: technology-specific expertise, which denotes the skills required for a particular technology, and general expertise, which encapsulates overall knowledge in the software industry. Afterward, we automatically classify the zones of expertise associated with each task a developer has worked on using Bidirectional Encoder Representations from Transformers (BERT)-like transformers to handle the unique characteristics of project tool datasets effectively. Finally, our method evaluates the proficiency of each software specialist across already completed projects from both technology-specific and general perspectives. The method was experimentally validated, yielding promising results.
Keywords: expertise formalization; transformer-based models; natural language processing; augmented data; project management tool; skill classification
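One way to sketch the "classify each task into a zone of expertise with a BERT-like model" step is zero-shot classification from Hugging Face Transformers. The model name and label set below are assumptions for illustration, not the authors' fine-tuned classifier:

```python
from transformers import pipeline

# Zero-shot stand-in for a fine-tuned BERT-like task classifier (model name is illustrative).
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

EXPERTISE_ZONES = ["backend development", "frontend development",
                   "databases", "devops", "testing"]  # hypothetical zones

def classify_task(task_summary: str) -> str:
    """Return the most likely expertise zone for a task-tracker ticket summary."""
    result = classifier(task_summary, candidate_labels=EXPERTISE_ZONES)
    return result["labels"][0]  # labels come back sorted by score

# Example: classify_task("Optimize the SQL query powering the billing report")
```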
10. A Convolutional Neural Network-Based Deep Support Vector Machine for Parkinson's Disease Detection with Small-Scale and Imbalanced Datasets
Authors: Kwok Tai Chui, Varsha Arya, Brij B. Gupta, Miguel Torres-Ruiz, Razaz Waheeb Attar. Computers, Materials & Continua, 2026, No. 1, pp. 1410-1432.
Parkinson's disease (PD) is a debilitating neurological disorder affecting over 10 million people worldwide. PD classification models using voice signals as input are common in the literature. It is believed that using deep learning algorithms further enhances performance; nevertheless, this is challenging due to the nature of small-scale and imbalanced PD datasets. This paper proposes a convolutional neural network-based deep support vector machine (CNN-DSVM) to automate the feature extraction process using a CNN and extend the conventional SVM to a DSVM for better classification performance on small-scale PD datasets. A customized kernel function reduces the impact of biased classification towards the majority class (healthy candidates in our consideration). An improved generative adversarial network (IGAN) was designed to generate additional training data to enhance the model's performance. For performance evaluation, the proposed algorithm achieves a sensitivity of 97.6% and a specificity of 97.3%. The performance comparison is evaluated from five perspectives, including comparisons with different data generation algorithms, feature extraction techniques, kernel functions, and existing works. Results reveal the effectiveness of the IGAN algorithm, which improves sensitivity and specificity by 4.05%-4.72% and 4.96%-5.86%, respectively, and the effectiveness of the CNN-DSVM algorithm, which improves sensitivity by 1.24%-57.4% and specificity by 1.04%-163% and reduces biased detection towards the majority class. Ablation experiments confirm the effectiveness of individual components. Two future research directions are also suggested.
Keywords: convolutional neural network; data generation; deep support vector machine; feature extraction; generative artificial intelligence; imbalanced dataset; medical diagnosis; Parkinson's disease; small-scale dataset
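The ideas of a customized kernel and reduced bias toward the majority (healthy) class can be illustrated with scikit-learn's SVC, which accepts a callable kernel and per-class weights. This is only a shallow stand-in for the paper's CNN-DSVM; the kernel form and class weights are assumptions:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import recall_score

def rbf_kernel(X, Y, gamma=0.05):
    """Plain RBF kernel written as a callable, as a starting point for customization."""
    sq_dists = (np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :]
                - 2.0 * X @ Y.T)
    return np.exp(-gamma * sq_dists)

def fit_weighted_svm(X_train, y_train):
    """X_train: extracted voice features; y_train: 1 = PD, 0 = healthy (majority class)."""
    # Heavier weight on the minority PD class to counteract the imbalanced dataset.
    clf = SVC(kernel=rbf_kernel, class_weight={0: 1.0, 1: 3.0})
    return clf.fit(X_train, y_train)

# sensitivity = recall_score(y_test, model.predict(X_test))                 # recall on PD class
# specificity = recall_score(y_test, model.predict(X_test), pos_label=0)    # recall on healthy class
```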
11. Application of a Bayesian method to data-poor stock assessment by using Indian Ocean albacore (Thunnus alalunga) stock assessment as an example (Cited: 15)
Authors: GUAN Wenjiang, TANG Lin, ZHU Jiangfeng, TIAN Siquan, XU Liuxiong. Acta Oceanologica Sinica (SCIE, CAS, CSCD), 2016, No. 2, pp. 117-125.
It is widely recognized that assessments of the status of data-poor fish stocks are challenging and that Bayesian analysis is one of the methods which can be used to improve the reliability of stock assessments in data-poor situations through borrowing strength from prior information deduced from species with good-quality data or other known information. Because there is considerable uncertainty remaining in the stock assessment of albacore tuna (Thunnus alalunga) in the Indian Ocean due to the limited and low-quality data, we investigate the advantages of a Bayesian method in data-poor stock assessment by using Indian Ocean albacore stock assessment as an example. Eight Bayesian biomass dynamics models with different prior assumptions and catch data series were developed to assess the stock. The results show (1) the rationality of the choice of catch data series and the assumption of parameters could be enhanced by analyzing the posterior distribution of the parameters; (2) the reliability of the stock assessment could be improved by using demographic methods to construct a prior for the intrinsic rate of increase (r). Because we can make use of more information to improve the rationality of parameter estimation and the reliability of the stock assessment compared with traditional statistical methods, by incorporating any available knowledge into the informative priors and analyzing the posterior distribution based on a Bayesian framework in data-poor situations, we suggest that the Bayesian method should be an alternative method to be applied in data-poor species stock assessment, such as Indian Ocean albacore.
Keywords: data-poor stock assessment; Bayesian method; catch data series; demographic method; Indian Ocean; Thunnus alalunga
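For reference, Bayesian biomass dynamics (surplus production) models of the kind described here are typically built on the Schaefer form, with priors placed on the intrinsic growth rate r and carrying capacity K. The notation below is a generic textbook statement, not the paper's exact specification:

```latex
B_{t+1} = B_t + r\,B_t\left(1 - \frac{B_t}{K}\right) - C_t,
\qquad
I_t = q\,B_t\,e^{\varepsilon_t},\quad \varepsilon_t \sim N(0,\sigma^2),
```

where B_t is biomass, C_t the catch, I_t an abundance index (e.g., standardized CPUE), and q the catchability coefficient; as the abstract notes, the prior for r can be derived from demographic methods, and posteriors for r, K, and q are obtained from the catch and index data.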
12. Forecasting Method of Stock Market Volatility in Time Series Data Based on Mixed Model of ARIMA and XGBoost (Cited: 17)
Authors: Yan Wang, Yuankai Guo. China Communications (SCIE, CSCD), 2020, No. 3, pp. 205-221.
Stock price forecasting is an important issue and interesting topic in financial markets. Because reasonable and accurate forecasts have the potential to generate high economic benefits, many researchers have been involved in the study of stock price forecasts. In this paper, the DWT-ARIMA-GSXGB hybrid model is proposed. First, the discrete wavelet transform is used to split the data set into approximation and error parts. Then the ARIMA(0,1,1), ARIMA(1,1,0), ARIMA(2,1,1) and ARIMA(3,1,0) models respectively process the approximation data, and the improved XGBoost model (GSXGB) handles the error data. Finally, the prediction results are combined using wavelet reconstruction. According to the experimental comparison of 10 stock data sets, the errors of the DWT-ARIMA-GSXGB model are smaller than those of the four prediction models ARIMA, XGBoost, GSXGB and DWT-ARIMA-XGBoost. The simulation results show that the DWT-ARIMA-GSXGB stock price prediction model has good approximation ability and generalization ability, and can fit the stock index opening price well. The proposed model is considered to greatly improve the predictive performance of a single ARIMA model or a single XGBoost model in predicting stock prices.
Keywords: hybrid model; discrete wavelet transform; ARIMA; XGBoost; grid search; stock price forecast
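A condensed sketch of the DWT-ARIMA-XGBoost idea using PyWavelets, statsmodels, and xgboost: decompose the series, model the smooth (approximation) part with ARIMA and the detail (error) part with a grid-searched XGBoost, then recombine by wavelet reconstruction. The wavelet choice, ARIMA order, lag count, and grid are placeholders rather than the paper's settings, and only a one-step-ahead forecast is shown:

```python
import numpy as np
import pywt
from statsmodels.tsa.arima.model import ARIMA
from sklearn.model_selection import GridSearchCV
from xgboost import XGBRegressor

def dwt_arima_xgb_forecast(prices):
    # 1) Single-level discrete wavelet transform: approximation (cA) + detail (cD).
    cA, cD = pywt.dwt(prices, "db4")

    # 2) ARIMA on the approximation coefficients (order is illustrative).
    arima_fc = ARIMA(cA, order=(1, 1, 0)).fit().forecast(steps=1)

    # 3) Grid-searched XGBoost ("GSXGB" role) on lagged detail coefficients.
    lags = 3
    X = np.column_stack([cD[i:len(cD) - lags + i] for i in range(lags)])
    y = cD[lags:]
    gs = GridSearchCV(XGBRegressor(), {"n_estimators": [100, 300], "max_depth": [3, 5]}, cv=3)
    gs.fit(X, y)
    detail_fc = gs.predict(cD[-lags:].reshape(1, -1))

    # 4) Wavelet reconstruction of the extended coefficient series; last sample is the forecast.
    return pywt.idwt(np.append(cA, arima_fc), np.append(cD, detail_fc), "db4")[-1]
```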
13. Using Data Mining with Time Series Data in Short-Term Stocks Prediction: A Literature Review (Cited: 3)
Authors: José Manuel Azevedo, Rui Almeida, Pedro Almeida. International Journal of Intelligence Science, 2012, No. 4, pp. 176-180.
Data Mining (DM) methods are being increasingly used in prediction with time series data, in addition to traditional statistical approaches. This paper presents a literature review of the use of DM with time series data, focusing on short-term stock prediction. This is an area that has been attracting a great deal of attention from researchers in the field. The main contribution of this paper is to provide an outline of the use of DM with time series data, using mainly examples related to short-term stock prediction. This is important for a better understanding of the field. Some of the main trends and open issues will also be introduced.
Keywords: data mining; time series; fundamental data; data frequency; application domain; short-term stock prediction
14. Indian stock market prediction using artificial neural networks on tick data (Cited: 2)
Authors: Dharmaraja Selvamuthu, Vineet Kumar, Abhishek Mishra. Financial Innovation, 2019, No. 1, pp. 267-278.
Introduction: Nowadays, one of the most significant challenges in the stock market is to predict stock prices. Stock price data represent a financial time series which is more difficult to predict due to its characteristics and dynamic nature. Case description: Support Vector Machines (SVM) and Artificial Neural Networks (ANN) are widely used for prediction of stock prices and their movements. Every algorithm has its own way of learning patterns and then predicting. Artificial Neural Networks (ANN) are a popular method which also incorporates technical analysis for making predictions in financial markets. Discussion and evaluation: The most common techniques used in the forecasting of financial time series are Support Vector Machine (SVM), Support Vector Regression (SVR) and Back Propagation Neural Network (BPNN). In this article, we use neural networks based on three different learning algorithms, i.e., Levenberg-Marquardt, Scaled Conjugate Gradient and Bayesian Regularization, for stock market prediction based on tick data as well as 15-min data of an Indian company, and their results are compared. Conclusion: All three algorithms provide an accuracy of 99.9% using tick data. The accuracy over the 15-min dataset drops to 96.2%, 97.0% and 98.9% for LM, SCG and Bayesian Regularization respectively, which is significantly poorer than the results obtained using tick data.
Keywords: neural networks; Indian stock market prediction; Levenberg-Marquardt; scaled conjugate gradient; Bayesian regularization; tick-by-tick data
15. Social Media and Stock Market Prediction: A Big Data Approach (Cited: 1)
Authors: Mazhar Javed Awan, Mohd Shafry Mohd Rahim, Haitham Nobanee, Ashna Munawar, Awais Yasin, Azlan Mohd Zain. Computers, Materials & Continua (SCIE, EI), 2021, No. 5, pp. 2569-2583.
Big data is the collection of large datasets from traditional and digital sources to identify trends and patterns. The quantity and variety of computer data are growing exponentially for many reasons. For example, retailers are building vast databases of customer sales activity, organizations are working on logistics and financial services, and public social media share a vast quantity of sentiment related to sales prices and products. Challenges of big data include volume and variety in both structured and unstructured data. In this paper, we implemented several machine learning models through Spark MLlib using PySpark, which is scalable, fast, easily integrated with other tools, and has better performance than traditional models. We studied the stocks of 10 top companies, whose data include historical stock prices, with MLlib models such as linear regression, generalized linear regression, random forest, and decision tree. We also implemented naive Bayes and logistic regression classification models. Experimental results suggest that linear regression, random forest, and generalized linear regression provide an accuracy of 80%-98%. The experimental results of the decision tree did not predict share price movements in the stock market well.
Keywords: big data analytics; artificial intelligence; machine learning; stock market; social media; business analytics
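The Spark MLlib workflow mentioned above follows a standard PySpark pattern, sketched below for a linear regression on historical prices. The file path and column names are placeholders, not those of the study:

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression
from pyspark.ml.evaluation import RegressionEvaluator

spark = SparkSession.builder.appName("stock-mllib-sketch").getOrCreate()

# Historical prices with columns such as open/high/low/volume/close (names are illustrative).
df = spark.read.csv("historical_prices.csv", header=True, inferSchema=True)
assembled = VectorAssembler(inputCols=["open", "high", "low", "volume"],
                            outputCol="features").transform(df)

train, test = assembled.randomSplit([0.8, 0.2], seed=42)
model = LinearRegression(featuresCol="features", labelCol="close").fit(train)

predictions = model.transform(test)
r2 = RegressionEvaluator(labelCol="close", metricName="r2").evaluate(predictions)
print(f"R^2 on held-out data: {r2:.3f}")
```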
16. Evaluation of removal of the size effect using data scaling and elliptic Fourier descriptors in otolith shape analysis, exemplified by the discrimination of two yellow croaker stocks along the Chinese coast (Cited: 1)
Authors: ZHAO Bo, LIU Jinhu, SONG Junjie, CAO Liang, DOU Shuozeng. Chinese Journal of Oceanology and Limnology (SCIE, CAS, CSCD), 2017, No. 6, pp. 1482-1492.
Removal of the length effect in otolith shape analysis for stock identification using length scaling is an important issue; however, few studies have attempted to investigate the effectiveness or weakness of this methodology in application. The aim of this study was to evaluate whether commonly used size scaling methods and normalized elliptic Fourier descriptors (NEFDs) could effectively remove the size effect of fish in stock discrimination. To achieve this goal, length groups from two known geographical stocks of yellow croaker, Larimichthys polyactis, along the Chinese coast (five groups from the Changjiang River estuary of the East China Sea and three groups from the Bohai Sea) were subjected to otolith shape analysis. The results indicated that the variation of otolith shape caused by intra-stock fish length might exceed that due to inter-stock geographical separation, even when otolith shape variables are standardized with length scaling methods. This variation could easily result in misleading stock discrimination through otolith shape analysis. Therefore, conclusions about fish stock structure should be carefully drawn from otolith shape analysis because the observed discrimination may primarily be due to length effects, rather than differences among stocks. The application of multiple methods, such as otolith shape analysis combined with elemental fingerprinting, tagging or genetic analysis, is recommended for stock identification.
Keywords: otolith shape analysis; data scaling for fish length; stock discrimination; removal of length effect
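For context, the length (size) effect in otolith shape variables is commonly removed with an allometric scaling before discriminant analysis. The statement below is the widely used Lleonart-type normalization in generic form, not necessarily the exact transformation applied in this paper:

```latex
X_i^{*} = X_i \left( \frac{L_0}{L_i} \right)^{b},
```

where X_i is a shape variable of fish i, L_i its body length, L_0 a reference length (e.g., the overall mean length), and b the common within-group slope of log X on log L; the study's point is that even after such scaling, residual length-driven variation can still dominate the between-stock signal.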
17. Impact and prediction of pollutant on mangrove and carbon stocks: A machine learning study based on urban remote sensing data (Cited: 2)
Authors: Mengjie Xu, Chuanwang Sun, Yanhong Zhan, Ye Liu. Geoscience Frontiers (SCIE, CAS, CSCD), 2024, No. 3, pp. 401-418.
Mangrove ecosystems have important ecological and economic values, especially their ability to store carbon. However, in recent years, human disturbance has accelerated mangrove degradation. Among the drivers, the emission of pollutants cannot be ignored. It is of great significance for carbon emission reduction and ecological protection to study the impacts of different pollutants on mangroves and their carbon stocks. Based on remote sensing data of coastal areas south of the Yangtze River in China's mainland, this paper builds the ensemble learning models Random Forest (RF) and Gradient Boosting Regression (GBR) to empirically analyse the relationship between industrial wastewater, industrial sulfur dioxide (SO2), PM2.5 and mangrove forests. The results show that the meteorologically normalised pollutant concentrations are more stable. The importance of pollutants presents regional heterogeneity. The area of mangroves in different cities and the corresponding total carbon stocks show different trends with the increase or decrease of pollutants, and there is a dynamic balance between urban pollutant discharge and mangrove growth in some cities. The research in this paper provides an analysis and explanation from the perspective of machine learning to explore the relationship between mangroves and pollutants and, at the same time, provides scientific suggestions for the formulation of future pollutant emission policies in different cities.
Keywords: mangrove forests; pollutants; machine learning model; carbon stocks; regional heterogeneity
18. Bankruptcy Probability and Stock Prices: The Effect of Altman Z-Score Information on Stock Prices Through Panel Data (Cited: 1)
Authors: Nicholas Apergis, John Sorros, Panagiotis Artikis, Vasilios Zisis. Journal of Modern Accounting and Auditing, 2011, No. 7, pp. 689-696.
There is an extensive branch of literature that examines the success of Altman's Z-score in predicting bankruptcy or financial distress. The goal of this research paper is to investigate the stock price performance of firms that exhibit a large probability of bankruptcy according to Altman's model. Regardless of the validity of Altman's Z-score, we utilize a new empirical design that relates stock price movements to Altman's Z-score. We focus on and examine, through the methodology of panel data, whether stocks that have a high probability of bankruptcy underperform stocks with a low probability of bankruptcy, or whether there are differences in the way the markets react to the financial health of the sample firms.
Keywords: Altman's Z-score; stock prices; panel data
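For reference, the original Altman (1968) Z-score for publicly traded manufacturing firms, which underlies the bankruptcy-probability grouping discussed above, is:

```latex
Z = 1.2X_1 + 1.4X_2 + 3.3X_3 + 0.6X_4 + 1.0X_5,
```

where X1 = working capital / total assets, X2 = retained earnings / total assets, X3 = EBIT / total assets, X4 = market value of equity / book value of total liabilities, and X5 = sales / total assets; lower Z values indicate higher bankruptcy probability, with Z below roughly 1.81 conventionally read as the distress zone.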
19. DAViS: a unified solution for data collection, analyzation, and visualization in real-time stock market prediction
Authors: Suppawong Tuarob, Poom Wettayakorn, Ponpat Phetchai, Siripong Traivijitkhun, Sunghoon Lim, Thanapon Noraset, Tipajin Thaipisutikul. Financial Innovation, 2021, No. 1, pp. 1232-1263.
The explosion of online information with the recent advent of digital technology in information processing, information storing, information sharing, natural language processing, and text mining techniques has enabled stock investors to uncover market movement and volatility from heterogeneous content. For example, a typical stock market investor reads the news, explores market sentiment, and analyzes technical details in order to make a sound decision prior to purchasing or selling a particular company's stock. However, capturing a dynamic stock market trend is challenging owing to high fluctuation and the non-stationary nature of the stock market. Although existing studies have attempted to enhance stock prediction, few have provided a complete decision-support system for investors to retrieve real-time data from multiple sources and extract insightful information for sound decision-making. To address the above challenge, we propose a unified solution for data collection, analysis, and visualization in real-time stock market prediction to retrieve and process relevant financial data from news articles, social media, and company technical information. We aim to provide not only useful information for stock investors but also meaningful visualization that enables investors to effectively interpret storyline events affecting stock prices. Specifically, we utilize an ensemble stacking of diversified machine-learning-based estimators and innovative contextual feature engineering to predict the next day's stock prices. Experiment results show that our proposed stock forecasting method outperforms a traditional baseline with an average mean absolute percentage error of 0.93. Our findings confirm that leveraging an ensemble scheme of machine learning methods with contextual information improves stock prediction performance. Finally, our study could be further extended to a wide variety of innovative financial applications that seek to incorporate external insight from contextual information such as large-scale online news articles and social media data.
Keywords: investment support system; stock data visualization; time series analysis; ensemble machine learning; text mining
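For reference, the mean absolute percentage error (MAPE) used above to report forecasting accuracy is conventionally defined as:

```latex
\mathrm{MAPE} = \frac{100\%}{n} \sum_{t=1}^{n} \left| \frac{y_t - \hat{y}_t}{y_t} \right|,
```

where y_t is the observed price and \hat{y}_t the model forecast at time t.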
20. Eucalyptus carbon stock estimation in subtropical regions with the modeling strategy of sample plots–airborne LiDAR–Landsat time series data (Cited: 1)
Authors: Xiandie Jiang, Dengqiu Li, Guiying Li, Dengsheng Lu. Forest Ecosystems (SCIE, CSCD), 2023, No. 6, pp. 700-716.
Updating eucalyptus carbon stock data in a timely manner is essential for better understanding and quantifying its effects on ecological and hydrological processes. At present, there are no suitable methods to accurately estimate the eucalyptus carbon stock over a large area. This research aimed to explore the transferability of the eucalyptus carbon stock estimation model at temporal and spatial scales and assess modeling performance through the strategy of combining sample plots, airborne LiDAR, and Landsat time series data in subtropical regions of China. Specifically, eucalyptus carbon stock estimates at typical sites were obtained by applying the developed models with the combination of airborne LiDAR and field measurement data; the eucalyptus plantation ages were estimated using the random localization segmentation approach from Landsat time series data; and regional models were developed by linking LiDAR-derived eucalyptus carbon stock and vegetation age (in months or years). To examine the models' robustness, the developed regional models were transferred to estimate carbon stocks at spatial and temporal scales, and the modeling results were evaluated using validation samples accordingly. The results showed that carbon stock can be successfully estimated using the age-based models (with age in either months or years as the predictor variable), but the month-based models produced better estimates, with a root mean square error (RMSE) of 6.51 t/ha for Yunxiao County, Fujian Province, and 6.33 t/ha for Gaofeng Forest Farm, Guangxi Zhuang Autonomous Region. In particular, the month-based models were superior for estimating the carbon stocks of young eucalyptus plantations of less than two years. The model transferability analyses showed that the month-based models had higher transferability than the year-based models at the temporal scale, indicating their suitability for analysis of carbon stock change. However, both the month-based and year-based models showed relatively poor transferability at the spatial scale. This study provides new insights for cost-effective monitoring of carbon stock change in intensively managed plantation forests.
Keywords: forest carbon stock; eucalyptus plantation; airborne LiDAR; Landsat time series; forest age