Precast concrete pavements(PCPs)represent an innovative solution in the construction industry,addressing the need for rapid,intelligent,and low-carbon pavement technologies that significantly reduce construction time ...Precast concrete pavements(PCPs)represent an innovative solution in the construction industry,addressing the need for rapid,intelligent,and low-carbon pavement technologies that significantly reduce construction time and environmental impact.However,the integration of prefabricated technology in pavement surface and base layers lacks systematic classification and understanding.This paper aims to fill this gap by introducing a detailed analysis of discretization and assembly connection technology for cement concrete pavement(CCP)structures.Through a comprehensive review of domestic and international literature,the study classifies prefabricated pavement technology based on discrete assembly structural layers and presents specific conclusions(i)surface layer discrete units are categorized into bottom plates,top plates,plate-rod separated assemblies,and prestressed connections,with optimal material compositions identified to enhance mechanical properties;(ii)base layer discrete units include block-type,plate-type,and beam-type elements,highlighting their contributions to sustainability by incorporating recycled materials(iii)planar assembly connection types are assessed,ranking them by load transfer efficiency,with specific dimensions provided for optimal performance;and(iv)vertical assembly connections are defined by their leveling and sealing layers,suitable for both new constructions and repairs of existing roads.The insights gained from this review not only clarify the distinctions between various structural layers but also provide practical guidelines for enhancing the design and implementation of PCP.This work contributes to advancing sustainable and resilient road construction practices,making it a significant reference for researchers and practitioners in the field.展开更多
Aiming at the tracking problem of a class of discrete nonaffine nonlinear multi-input multi-output(MIMO) repetitive systems subjected to separable and nonseparable disturbances, a novel data-driven iterative learning ...Aiming at the tracking problem of a class of discrete nonaffine nonlinear multi-input multi-output(MIMO) repetitive systems subjected to separable and nonseparable disturbances, a novel data-driven iterative learning control(ILC) scheme based on the zeroing neural networks(ZNNs) is proposed. First, the equivalent dynamic linearization data model is obtained by means of dynamic linearization technology, which exists theoretically in the iteration domain. Then, the iterative extended state observer(IESO) is developed to estimate the disturbance and the coupling between systems, and the decoupled dynamic linearization model is obtained for the purpose of controller synthesis. To solve the zero-seeking tracking problem with inherent tolerance of noise,an ILC based on noise-tolerant modified ZNN is proposed. The strict assumptions imposed on the initialization conditions of each iteration in the existing ILC methods can be absolutely removed with our method. In addition, theoretical analysis indicates that the modified ZNN can converge to the exact solution of the zero-seeking tracking problem. Finally, a generalized example and an application-oriented example are presented to verify the effectiveness and superiority of the proposed process.展开更多
To analyze the differences in the transport and distribution of different types of proppants and to address issues such as the short effective support of proppant and poor placement in hydraulically intersecting fract...To analyze the differences in the transport and distribution of different types of proppants and to address issues such as the short effective support of proppant and poor placement in hydraulically intersecting fractures,this study considered the combined impact of geological-engineering factors on conductivity.Using reservoir production parameters and the discrete elementmethod,multispherical proppants were constructed.Additionally,a 3D fracture model,based on the specified conditions of the L block,employed coupled(Computational Fluid Dynamics)CFD-DEM(Discrete ElementMethod)for joint simulations to quantitatively analyze the transport and placement patterns of multispherical proppants in intersecting fractures.Results indicate that turbulent kinetic energy is an intrinsic factor affecting proppant transport.Moreover,the efficiency of placement and migration distance of low-sphericity quartz sand constructed by the DEM in the main fracture are significantly reduced compared to spherical ceramic proppants,with a 27.7%decrease in the volume fraction of the fracture surface,subsequently affecting the placement concentration and damaging fracture conductivity.Compared to small-angle fractures,controlling artificial and natural fractures to expand at angles of 45°to 60°increases the effective support length by approximately 20.6%.During hydraulic fracturing of gas wells,ensuring the fracture support area and post-closure conductivity can be achieved by controlling the sphericity of proppants and adjusting the perforation direction to control the direction of artificial fractures.展开更多
In the context of global mean square error concerning the number of random variables in the representation,the Karhunen–Loève(KL)expansion is the optimal series expansion method for random field discretization.T...In the context of global mean square error concerning the number of random variables in the representation,the Karhunen–Loève(KL)expansion is the optimal series expansion method for random field discretization.The computational efficiency and accuracy of the KL expansion are contingent upon the accurate resolution of the Fredholm integral eigenvalue problem(IEVP).The paper proposes an interpolation method based on different interpolation basis functions such as moving least squares(MLS),least squares(LS),and finite element method(FEM)to solve the IEVP.Compared with the Galerkin method based on finite element or Legendre polynomials,the main advantage of the interpolation method is that,in the calculation of eigenvalues and eigenfunctions in one-dimensional random fields,the integral matrix containing covariance function only requires a single integral,which is less than a two-folded integral by the Galerkin method.The effectiveness and computational efficiency of the proposed interpolation method are verified through various one-dimensional examples.Furthermore,based on theKL expansion and polynomial chaos expansion,the stochastic analysis of two-dimensional regular and irregular domains is conducted,and the basis function of the extended finite element method(XFEM)is introduced as the interpolation basis function in two-dimensional irregular domains to solve the IEVP.展开更多
We introduce CURDIS,a template for algorithms to discretize arcs of regular curves by incrementally producing a list of support pixels covering the arc.In this template,algorithms proceed by finding the tangent quadra...We introduce CURDIS,a template for algorithms to discretize arcs of regular curves by incrementally producing a list of support pixels covering the arc.In this template,algorithms proceed by finding the tangent quadrant at each point of the arc and determining which side the curve exits the pixel according to a tailored criterion.These two elements can be adapted for any type of curve,leading to algorithms dedicated to the shape of specific curves.While the calculation of the tangent quadrant for various curves,such as lines,conics,or cubics,is simple,it is more complex to analyze how pixels are traversed by the curve.In the case of conic arcs,we found a criterion for determining the pixel exit side.This leads us to present a new algorithm,called CURDIS-C,specific to the discretization of conics,for which we provide all the details.Surprisingly,the criterion for conics requires between one and three sign tests and four additions per pixel,making the algorithm efficient for resource-constrained systems and feasible for fixed-point or integer arithmetic implementations.Our algorithm also perfectly handles the pathological cases in which the conic intersects a pixel twice or changes quadrants multiple times within this pixel,achieving this generality at the cost of potentially computing up to two square roots per arc.We illustrate the use of CURDIS for the discretization of different curves,such as ellipses,hyperbolas,and parabolas,even when they degenerate into lines or corners.展开更多
Discrete fracture network(DFN)commonly existing in natural rock masses plays an important role in geological complexity which can influence rock fracturing behaviour during fluid injection.This paper simulated the hyd...Discrete fracture network(DFN)commonly existing in natural rock masses plays an important role in geological complexity which can influence rock fracturing behaviour during fluid injection.This paper simulated the hydraulic fracturing process in lab-scale coal samples with DFNs and the induced seismic activities by the discrete element method(DEM).The effects of DFNs on hydraulic fracturing,induced seismicity and elastic property changes have been concluded.Denser DFNs can comprehensively decrease the peak injection pressure and injection duration.The proportion of strong seismic events increases first and then decreases with increasing DFN density.In addition,the relative modulus of the rock mass is derived innovatively from breakdown pressure,breakdown fracture length and the related initiation time.Increasing DFN densities among large(35–60 degrees)and small(0–30 degrees)fracture dip angles show opposite evolution trends in relative modulus.The transitional point(dip angle)for the opposite trends is also proportionally affected by the friction angle of the rock mass.The modelling results have much practical meaning to infer the density and geometry of pre-existing fractures and the elastic property of rock mass in the field,simply based on the hydraulic fracturing and induced seismicity monitoring data.展开更多
We propose an integrated method of data-driven and mechanism models for well logging formation evaluation,explicitly focusing on predicting reservoir parameters,such as porosity and water saturation.Accurately interpr...We propose an integrated method of data-driven and mechanism models for well logging formation evaluation,explicitly focusing on predicting reservoir parameters,such as porosity and water saturation.Accurately interpreting these parameters is crucial for effectively exploring and developing oil and gas.However,with the increasing complexity of geological conditions in this industry,there is a growing demand for improved accuracy in reservoir parameter prediction,leading to higher costs associated with manual interpretation.The conventional logging interpretation methods rely on empirical relationships between logging data and reservoir parameters,which suffer from low interpretation efficiency,intense subjectivity,and suitability for ideal conditions.The application of artificial intelligence in the interpretation of logging data provides a new solution to the problems existing in traditional methods.It is expected to improve the accuracy and efficiency of the interpretation.If large and high-quality datasets exist,data-driven models can reveal relationships of arbitrary complexity.Nevertheless,constructing sufficiently large logging datasets with reliable labels remains challenging,making it difficult to apply data-driven models effectively in logging data interpretation.Furthermore,data-driven models often act as“black boxes”without explaining their predictions or ensuring compliance with primary physical constraints.This paper proposes a machine learning method with strong physical constraints by integrating mechanism and data-driven models.Prior knowledge of logging data interpretation is embedded into machine learning regarding network structure,loss function,and optimization algorithm.We employ the Physically Informed Auto-Encoder(PIAE)to predict porosity and water saturation,which can be trained without labeled reservoir parameters using self-supervised learning techniques.This approach effectively achieves automated interpretation and facilitates generalization across diverse datasets.展开更多
A data-driven model ofmultiple variable cutting(M-VCUT)level set-based substructure is proposed for the topology optimization of lattice structures.TheM-VCUTlevel setmethod is used to represent substructures,enriching...A data-driven model ofmultiple variable cutting(M-VCUT)level set-based substructure is proposed for the topology optimization of lattice structures.TheM-VCUTlevel setmethod is used to represent substructures,enriching their diversity of configuration while ensuring connectivity.To construct the data-driven model of substructure,a database is prepared by sampling the space of substructures spanned by several substructure prototypes.Then,for each substructure in this database,the stiffness matrix is condensed so that its degrees of freedomare reduced.Thereafter,the data-drivenmodel of substructures is constructed through interpolationwith compactly supported radial basis function(CS-RBF).The inputs of the data-driven model are the design variables of topology optimization,and the outputs are the condensed stiffness matrix and volume of substructures.During the optimization,this data-driven model is used,thus avoiding repeated static condensation that would requiremuch computation time.Several numerical examples are provided to verify the proposed method.展开更多
The outstanding comprehensive mechanical properties of newly developed hybrid lattice structures make them useful in engineering applications for bearing multiple mechanical loads.Additive-manufacturing technologies m...The outstanding comprehensive mechanical properties of newly developed hybrid lattice structures make them useful in engineering applications for bearing multiple mechanical loads.Additive-manufacturing technologies make it possible to fabricate these highly spatially programmable structures and greatly enhance the freedom in their design.However,traditional analytical methods do not sufficiently reflect the actual vibration-damping mechanism of lattice structures and are limited by their high computational cost.In this study,a hybrid lattice structure consisting of various cells was designed based on quasi-static and vibration experiments.Subsequently,a novel parametric design method based on a data-driven approach was developed for hybrid lattices with engineered properties.The response surface method was adopted to define the sensitive optimization target.A prediction model for the lattice geometric parameters and vibration properties was established using a backpropagation neural network.Then,it was integrated into the genetic algorithm to create the optimal hybrid lattice with varying geometric features and the required wide-band vibration-damping characteristics.Validation experiments were conducted,demonstrating that the optimized hybrid lattice can achieve the target properties.In addition,the data-driven parametric design method can reduce computation time and be widely applied to complex structural designs when analytical and empirical solutions are unavailable.展开更多
The Underwater Acoustic(UWA)channel is bandwidth-constrained and experiences doubly selective fading.It is challenging to acquire perfect channel knowledge for Orthogonal Frequency Division Multiplexing(OFDM)communica...The Underwater Acoustic(UWA)channel is bandwidth-constrained and experiences doubly selective fading.It is challenging to acquire perfect channel knowledge for Orthogonal Frequency Division Multiplexing(OFDM)communications using a finite number of pilots.On the other hand,Deep Learning(DL)approaches have been very successful in wireless OFDM communications.However,whether they will work underwater is still a mystery.For the first time,this paper compares two categories of DL-based UWA OFDM receivers:the DataDriven(DD)method,which performs as an end-to-end black box,and the Model-Driven(MD)method,also known as the model-based data-driven method,which combines DL and expert OFDM receiver knowledge.The encoder-decoder framework and Convolutional Neural Network(CNN)structure are employed to establish the DD receiver.On the other hand,an unfolding-based Minimum Mean Square Error(MMSE)structure is adopted for the MD receiver.We analyze the characteristics of different receivers by Monte Carlo simulations under diverse communications conditions and propose a strategy for selecting a proper receiver under different communication scenarios.Field trials in the pool and sea are also conducted to verify the feasibility and advantages of the DL receivers.It is observed that DL receivers perform better than conventional receivers in terms of bit error rate.展开更多
When assessing seismic liquefaction potential with data-driven models,addressing the uncertainties of establishing models,interpreting cone penetration tests(CPT)data and decision threshold is crucial for avoiding bia...When assessing seismic liquefaction potential with data-driven models,addressing the uncertainties of establishing models,interpreting cone penetration tests(CPT)data and decision threshold is crucial for avoiding biased data selection,ameliorating overconfident models,and being flexible to varying practical objectives,especially when the training and testing data are not identically distributed.A workflow characterized by leveraging Bayesian methodology was proposed to address these issues.Employing a Multi-Layer Perceptron(MLP)as the foundational model,this approach was benchmarked against empirical methods and advanced algorithms for its efficacy in simplicity,accuracy,and resistance to overfitting.The analysis revealed that,while MLP models optimized via maximum a posteriori algorithm suffices for straightforward scenarios,Bayesian neural networks showed great potential for preventing overfitting.Additionally,integrating decision thresholds through various evaluative principles offers insights for challenging decisions.Two case studies demonstrate the framework's capacity for nuanced interpretation of in situ data,employing a model committee for a detailed evaluation of liquefaction potential via Monte Carlo simulations and basic statistics.Overall,the proposed step-by-step workflow for analyzing seismic liquefaction incorporates multifold testing and real-world data validation,showing improved robustness against overfitting and greater versatility in addressing practical challenges.This research contributes to the seismic liquefaction assessment field by providing a structured,adaptable methodology for accurate and reliable analysis.展开更多
Hydraulic fracturing technology has achieved remarkable results in improving the production of tight gas reservoirs,but its effectiveness is under the joint action of multiple factors of complexity.Traditional analysi...Hydraulic fracturing technology has achieved remarkable results in improving the production of tight gas reservoirs,but its effectiveness is under the joint action of multiple factors of complexity.Traditional analysis methods have limitations in dealing with these complex and interrelated factors,and it is difficult to fully reveal the actual contribution of each factor to the production.Machine learning-based methods explore the complex mapping relationships between large amounts of data to provide datadriven insights into the key factors driving production.In this study,a data-driven PCA-RF-VIM(Principal Component Analysis-Random Forest-Variable Importance Measures)approach of analyzing the importance of features is proposed to identify the key factors driving post-fracturing production.Four types of parameters,including log parameters,geological and reservoir physical parameters,hydraulic fracturing design parameters,and reservoir stimulation parameters,were inputted into the PCA-RF-VIM model.The model was trained using 6-fold cross-validation and grid search,and the relative importance ranking of each factor was finally obtained.In order to verify the validity of the PCA-RF-VIM model,a consolidation model that uses three other independent data-driven methods(Pearson correlation coefficient,RF feature significance analysis method,and XGboost feature significance analysis method)are applied to compare with the PCA-RF-VIM model.A comparison the two models shows that they contain almost the same parameters in the top ten,with only minor differences in one parameter.In combination with the reservoir characteristics,the reasonableness of the PCA-RF-VIM model is verified,and the importance ranking of the parameters by this method is more consistent with the reservoir characteristics of the study area.Ultimately,the ten parameters are selected as the controlling factors that have the potential to influence post-fracturing gas production,as the combined importance of these top ten parameters is 91.95%on driving natural gas production.Analyzing and obtaining these ten controlling factors provides engineers with a new insight into the reservoir selection for fracturing stimulation and fracturing parameter optimization to improve fracturing efficiency and productivity.展开更多
In the rapidly evolving technological landscape,state-owned enterprises(SOEs)encounter significant challenges in sustaining their competitiveness through efficient R&D management.Integrated Product Development(IPD...In the rapidly evolving technological landscape,state-owned enterprises(SOEs)encounter significant challenges in sustaining their competitiveness through efficient R&D management.Integrated Product Development(IPD),with its emphasis on cross-functional teamwork,concurrent engineering,and data-driven decision-making,has been widely recognized for enhancing R&D efficiency and product quality.However,the unique characteristics of SOEs pose challenges to the effective implementation of IPD.The advancement of big data and artificial intelligence technologies offers new opportunities for optimizing IPD R&D management through data-driven decision-making models.This paper constructs and validates a data-driven decision-making model tailored to the IPD R&D management of SOEs.By integrating data mining,machine learning,and other advanced analytical techniques,the model serves as a scientific and efficient decision-making tool.It aids SOEs in optimizing R&D resource allocation,shortening product development cycles,reducing R&D costs,and improving product quality and innovation.Moreover,this study contributes to a deeper theoretical understanding of the value of data-driven decision-making in the context of IPD.展开更多
This paper focuses on the numerical solution of a tumor growth model under a data-driven approach.Based on the inherent laws of the data and reasonable assumptions,an ordinary differential equation model for tumor gro...This paper focuses on the numerical solution of a tumor growth model under a data-driven approach.Based on the inherent laws of the data and reasonable assumptions,an ordinary differential equation model for tumor growth is established.Nonlinear fitting is employed to obtain the optimal parameter estimation of the mathematical model,and the numerical solution is carried out using the Matlab software.By comparing the clinical data with the simulation results,a good agreement is achieved,which verifies the rationality and feasibility of the model.展开更多
With the development of cyber-physical systems,system security faces more risks from cyber-attacks.In this work,we study the problem that an external attacker implements covert sensor and actuator attacks with resourc...With the development of cyber-physical systems,system security faces more risks from cyber-attacks.In this work,we study the problem that an external attacker implements covert sensor and actuator attacks with resource constraints(the total resource consumption of the attacks is not greater than a given initial resource of the attacker)to mislead a discrete event system under supervisory control to reach unsafe states.We consider that the attacker can implement two types of attacks:One by modifying the sensor readings observed by a supervisor and the other by enabling the actuator commands disabled by the supervisor.Each attack has its corresponding resource consumption and remains covert.To solve this problem,we first introduce a notion of combined-attackability to determine whether a closedloop system may reach an unsafe state after receiving attacks with resource constraints.We develop an algorithm to construct a corrupted supervisor under attacks,provide a verification method for combined-attackability in polynomial time based on a plant,a corrupted supervisor,and an attacker's initial resource,and propose a corresponding attack synthesis algorithm.The effectiveness of the proposed method is illustrated by an example.展开更多
Based on the educational evaluation reform,this study explores the construction of an evidence-based value-added evaluation system based on data-driven,aiming to solve the limitations of traditional evaluation methods...Based on the educational evaluation reform,this study explores the construction of an evidence-based value-added evaluation system based on data-driven,aiming to solve the limitations of traditional evaluation methods.The research adopts the method of combining theoretical analysis and practical application,and designs the evidence-based value-added evaluation framework,which includes the core elements of a multi-source heterogeneous data acquisition and processing system,a value-added evaluation agent based on a large model,and an evaluation implementation and application mechanism.Through empirical research verification,the evaluation system has remarkable effects in improving learning participation,promoting ability development,and supporting teaching decision-making,and provides a theoretical reference and practical path for educational evaluation reform in the new era.The research shows that the evidence-based value-added evaluation system based on data-driven can reflect students’actual progress more fairly and objectively by accurately measuring the difference in starting point and development range of students,and provide strong support for the realization of high-quality education development.展开更多
The impacts of lateral boundary conditions(LBCs)provided by numerical models and data-driven networks on convective-scale ensemble forecasts are investigated in this study.Four experiments are conducted on the Hangzho...The impacts of lateral boundary conditions(LBCs)provided by numerical models and data-driven networks on convective-scale ensemble forecasts are investigated in this study.Four experiments are conducted on the Hangzhou RDP(19th Hangzhou Asian Games Research Development Project on Convective-scale Ensemble Prediction and Application)testbed,with the LBCs respectively sourced from National Centers for Environmental Prediction(NCEP)Global Forecast System(GFS)forecasts with 33 vertical levels(Exp_GFS),Pangu forecasts with 13 vertical levels(Exp_Pangu),Fuxi forecasts with 13 vertical levels(Exp_Fuxi),and NCEP GFS forecasts with the vertical levels reduced to 13(the same as those of Exp_Pangu and Exp_Fuxi)(Exp_GFSRDV).In general,Exp_Pangu performs comparably to Exp_GFS,while Exp_Fuxi shows slightly inferior performance compared to Exp_Pangu,possibly due to its less accurate large-scale predictions.Therefore,the ability of using data-driven networks to efficiently provide LBCs for convective-scale ensemble forecasts has been demonstrated.Moreover,Exp_GFSRDV has the worst convective-scale forecasts among the four experiments,which indicates the potential improvement of using data-driven networks for LBCs by increasing the vertical levels of the networks.However,the ensemble spread of the four experiments barely increases with lead time.Thus,each experiment has insufficient ensemble spread to present realistic forecast uncertainties,which will be investigated in a future study.展开更多
With the rapid advancement of machine learning technology and its growing adoption in research and engineering applications,an increasing number of studies have embraced data-driven approaches for modeling wind turbin...With the rapid advancement of machine learning technology and its growing adoption in research and engineering applications,an increasing number of studies have embraced data-driven approaches for modeling wind turbine wakes.These models leverage the ability to capture complex,high-dimensional characteristics of wind turbine wakes while offering significantly greater efficiency in the prediction process than physics-driven models.As a result,data-driven wind turbine wake models are regarded as powerful and effective tools for predicting wake behavior and turbine power output.This paper aims to provide a concise yet comprehensive review of existing studies on wind turbine wake modeling that employ data-driven approaches.It begins by defining and classifying machine learning methods to facilitate a clearer understanding of the reviewed literature.Subsequently,the related studies are categorized into four key areas:wind turbine power prediction,data-driven analytic wake models,wake field reconstruction,and the incorporation of explicit physical constraints.The accuracy of data-driven models is influenced by two primary factors:the quality of the training data and the performance of the model itself.Accordingly,both data accuracy and model structure are discussed in detail within the review.展开更多
NJmat is a user-friendly,data-driven machine learning interface designed for materials design and analysis.The platform integrates advanced computational techniques,including natural language processing(NLP),large lan...NJmat is a user-friendly,data-driven machine learning interface designed for materials design and analysis.The platform integrates advanced computational techniques,including natural language processing(NLP),large language models(LLM),machine learning potentials(MLP),and graph neural networks(GNN),to facili-tate materials discovery.The platform has been applied in diverse materials research areas,including perovskite surface design,catalyst discovery,battery materials screening,structural alloy design,and molecular informatics.By automating feature selection,predictive modeling,and result interpretation,NJmat accelerates the development of high-performance materials across energy storage,conversion,and structural applications.Additionally,NJmat serves as an educational tool,allowing students and researchers to apply machine learning techniques in materials science with minimal coding expertise.Through automated feature extraction,genetic algorithms,and interpretable machine learning models,NJmat simplifies the workflow for materials informatics,bridging the gap between AI and experimental materials research.The latest version(available at https://figshare.com/articles/software/NJmatML/24607893(accessed on 01 January 2025))enhances its functionality by incorporating NJmatNLP,a module leveraging language models like MatBERT and those based on Word2Vec to support materials prediction tasks.By utilizing clustering and cosine similarity analysis with UMAP visualization,NJmat enables intuitive exploration of materials datasets.While NJmat primarily focuses on structure-property relationships and the discovery of novel chemistries,it can also assist in optimizing processing conditions when relevant parameters are included in the training data.By providing an accessible,integrated environment for machine learning-driven materials discovery,NJmat aligns with the objectives of the Materials Genome Initiative and promotes broader adoption of AI techniques in materials science.展开更多
Image watermarking is a powerful tool for media protection and can provide promising results when combined with other defense mechanisms.Image watermarking can be used to protect the copyright of digital media by embe...Image watermarking is a powerful tool for media protection and can provide promising results when combined with other defense mechanisms.Image watermarking can be used to protect the copyright of digital media by embedding a unique identifier that identifies the owner of the content.Image watermarking can also be used to verify the authenticity of digital media,such as images or videos,by ascertaining the watermark information.In this paper,a mathematical chaos-based image watermarking technique is proposed using discrete wavelet transform(DWT),chaotic map,and Laplacian operator.The DWT can be used to decompose the image into its frequency components,chaos is used to provide extra security defense by encrypting the watermark signal,and the Laplacian operator with optimization is applied to the mid-frequency bands to find the sharp areas in the image.These mid-frequency bands are used to embed the watermarks by modifying the coefficients in these bands.The mid-sub-band maintains the invisible property of the watermark,and chaos combined with the second-order derivative Laplacian is vulnerable to attacks.Comprehensive experiments demonstrate that this approach is effective for common signal processing attacks,i.e.,compression,noise addition,and filtering.Moreover,this approach also maintains image quality through peak signal-to-noise ratio(PSNR)and structural similarity index metrics(SSIM).The highest achieved PSNR and SSIM values are 55.4 dB and 1.In the same way,normalized correlation(NC)values are almost 10%–20%higher than comparative research.These results support assistance in copyright protection in multimedia content.展开更多
基金supported by the Research Program of Wuhan Building Energy Efficiency Office(grant number 202331).
文摘Precast concrete pavements(PCPs)represent an innovative solution in the construction industry,addressing the need for rapid,intelligent,and low-carbon pavement technologies that significantly reduce construction time and environmental impact.However,the integration of prefabricated technology in pavement surface and base layers lacks systematic classification and understanding.This paper aims to fill this gap by introducing a detailed analysis of discretization and assembly connection technology for cement concrete pavement(CCP)structures.Through a comprehensive review of domestic and international literature,the study classifies prefabricated pavement technology based on discrete assembly structural layers and presents specific conclusions(i)surface layer discrete units are categorized into bottom plates,top plates,plate-rod separated assemblies,and prestressed connections,with optimal material compositions identified to enhance mechanical properties;(ii)base layer discrete units include block-type,plate-type,and beam-type elements,highlighting their contributions to sustainability by incorporating recycled materials(iii)planar assembly connection types are assessed,ranking them by load transfer efficiency,with specific dimensions provided for optimal performance;and(iv)vertical assembly connections are defined by their leveling and sealing layers,suitable for both new constructions and repairs of existing roads.The insights gained from this review not only clarify the distinctions between various structural layers but also provide practical guidelines for enhancing the design and implementation of PCP.This work contributes to advancing sustainable and resilient road construction practices,making it a significant reference for researchers and practitioners in the field.
基金supported by the National Natural Science Foundation of China(U21A20166)in part by the Science and Technology Development Foundation of Jilin Province (20230508095RC)+1 种基金in part by the Development and Reform Commission Foundation of Jilin Province (2023C034-3)in part by the Exploration Foundation of State Key Laboratory of Automotive Simulation and Control。
文摘Aiming at the tracking problem of a class of discrete nonaffine nonlinear multi-input multi-output(MIMO) repetitive systems subjected to separable and nonseparable disturbances, a novel data-driven iterative learning control(ILC) scheme based on the zeroing neural networks(ZNNs) is proposed. First, the equivalent dynamic linearization data model is obtained by means of dynamic linearization technology, which exists theoretically in the iteration domain. Then, the iterative extended state observer(IESO) is developed to estimate the disturbance and the coupling between systems, and the decoupled dynamic linearization model is obtained for the purpose of controller synthesis. To solve the zero-seeking tracking problem with inherent tolerance of noise,an ILC based on noise-tolerant modified ZNN is proposed. The strict assumptions imposed on the initialization conditions of each iteration in the existing ILC methods can be absolutely removed with our method. In addition, theoretical analysis indicates that the modified ZNN can converge to the exact solution of the zero-seeking tracking problem. Finally, a generalized example and an application-oriented example are presented to verify the effectiveness and superiority of the proposed process.
基金funded by the project of the Major Scientific and Technological Projects of CNOOC in the 14th Five-Year Plan(No.KJGG2022-0701)the CNOOC Research Institute(No.2020PFS-03).
文摘To analyze the differences in the transport and distribution of different types of proppants and to address issues such as the short effective support of proppant and poor placement in hydraulically intersecting fractures,this study considered the combined impact of geological-engineering factors on conductivity.Using reservoir production parameters and the discrete elementmethod,multispherical proppants were constructed.Additionally,a 3D fracture model,based on the specified conditions of the L block,employed coupled(Computational Fluid Dynamics)CFD-DEM(Discrete ElementMethod)for joint simulations to quantitatively analyze the transport and placement patterns of multispherical proppants in intersecting fractures.Results indicate that turbulent kinetic energy is an intrinsic factor affecting proppant transport.Moreover,the efficiency of placement and migration distance of low-sphericity quartz sand constructed by the DEM in the main fracture are significantly reduced compared to spherical ceramic proppants,with a 27.7%decrease in the volume fraction of the fracture surface,subsequently affecting the placement concentration and damaging fracture conductivity.Compared to small-angle fractures,controlling artificial and natural fractures to expand at angles of 45°to 60°increases the effective support length by approximately 20.6%.During hydraulic fracturing of gas wells,ensuring the fracture support area and post-closure conductivity can be achieved by controlling the sphericity of proppants and adjusting the perforation direction to control the direction of artificial fractures.
基金The authors gratefully acknowledge the support provided by the Postgraduate Research&Practice Program of Jiangsu Province(Grant No.KYCX18_0526)the Fundamental Research Funds for the Central Universities(Grant No.2018B682X14)Guangdong Basic and Applied Basic Research Foundation(No.2021A1515110807).
文摘In the context of global mean square error concerning the number of random variables in the representation,the Karhunen–Loève(KL)expansion is the optimal series expansion method for random field discretization.The computational efficiency and accuracy of the KL expansion are contingent upon the accurate resolution of the Fredholm integral eigenvalue problem(IEVP).The paper proposes an interpolation method based on different interpolation basis functions such as moving least squares(MLS),least squares(LS),and finite element method(FEM)to solve the IEVP.Compared with the Galerkin method based on finite element or Legendre polynomials,the main advantage of the interpolation method is that,in the calculation of eigenvalues and eigenfunctions in one-dimensional random fields,the integral matrix containing covariance function only requires a single integral,which is less than a two-folded integral by the Galerkin method.The effectiveness and computational efficiency of the proposed interpolation method are verified through various one-dimensional examples.Furthermore,based on theKL expansion and polynomial chaos expansion,the stochastic analysis of two-dimensional regular and irregular domains is conducted,and the basis function of the extended finite element method(XFEM)is introduced as the interpolation basis function in two-dimensional irregular domains to solve the IEVP.
文摘We introduce CURDIS,a template for algorithms to discretize arcs of regular curves by incrementally producing a list of support pixels covering the arc.In this template,algorithms proceed by finding the tangent quadrant at each point of the arc and determining which side the curve exits the pixel according to a tailored criterion.These two elements can be adapted for any type of curve,leading to algorithms dedicated to the shape of specific curves.While the calculation of the tangent quadrant for various curves,such as lines,conics,or cubics,is simple,it is more complex to analyze how pixels are traversed by the curve.In the case of conic arcs,we found a criterion for determining the pixel exit side.This leads us to present a new algorithm,called CURDIS-C,specific to the discretization of conics,for which we provide all the details.Surprisingly,the criterion for conics requires between one and three sign tests and four additions per pixel,making the algorithm efficient for resource-constrained systems and feasible for fixed-point or integer arithmetic implementations.Our algorithm also perfectly handles the pathological cases in which the conic intersects a pixel twice or changes quadrants multiple times within this pixel,achieving this generality at the cost of potentially computing up to two square roots per arc.We illustrate the use of CURDIS for the discretization of different curves,such as ellipses,hyperbolas,and parabolas,even when they degenerate into lines or corners.
基金Australian Research Council Linkage Program(LP200301404)for sponsoring this researchthe financial support provided by the State Key Laboratory of Geohazard Prevention and Geoenvironment Protection(Chengdu University of Technology,SKLGP2021K002)National Natural Science Foundation of China(52374101,32111530138).
文摘Discrete fracture network(DFN)commonly existing in natural rock masses plays an important role in geological complexity which can influence rock fracturing behaviour during fluid injection.This paper simulated the hydraulic fracturing process in lab-scale coal samples with DFNs and the induced seismic activities by the discrete element method(DEM).The effects of DFNs on hydraulic fracturing,induced seismicity and elastic property changes have been concluded.Denser DFNs can comprehensively decrease the peak injection pressure and injection duration.The proportion of strong seismic events increases first and then decreases with increasing DFN density.In addition,the relative modulus of the rock mass is derived innovatively from breakdown pressure,breakdown fracture length and the related initiation time.Increasing DFN densities among large(35–60 degrees)and small(0–30 degrees)fracture dip angles show opposite evolution trends in relative modulus.The transitional point(dip angle)for the opposite trends is also proportionally affected by the friction angle of the rock mass.The modelling results have much practical meaning to infer the density and geometry of pre-existing fractures and the elastic property of rock mass in the field,simply based on the hydraulic fracturing and induced seismicity monitoring data.
基金supported by National Key Research and Development Program (2019YFA0708301)National Natural Science Foundation of China (51974337)+2 种基金the Strategic Cooperation Projects of CNPC and CUPB (ZLZX2020-03)Science and Technology Innovation Fund of CNPC (2021DQ02-0403)Open Fund of Petroleum Exploration and Development Research Institute of CNPC (2022-KFKT-09)
文摘We propose an integrated method of data-driven and mechanism models for well logging formation evaluation,explicitly focusing on predicting reservoir parameters,such as porosity and water saturation.Accurately interpreting these parameters is crucial for effectively exploring and developing oil and gas.However,with the increasing complexity of geological conditions in this industry,there is a growing demand for improved accuracy in reservoir parameter prediction,leading to higher costs associated with manual interpretation.The conventional logging interpretation methods rely on empirical relationships between logging data and reservoir parameters,which suffer from low interpretation efficiency,intense subjectivity,and suitability for ideal conditions.The application of artificial intelligence in the interpretation of logging data provides a new solution to the problems existing in traditional methods.It is expected to improve the accuracy and efficiency of the interpretation.If large and high-quality datasets exist,data-driven models can reveal relationships of arbitrary complexity.Nevertheless,constructing sufficiently large logging datasets with reliable labels remains challenging,making it difficult to apply data-driven models effectively in logging data interpretation.Furthermore,data-driven models often act as“black boxes”without explaining their predictions or ensuring compliance with primary physical constraints.This paper proposes a machine learning method with strong physical constraints by integrating mechanism and data-driven models.Prior knowledge of logging data interpretation is embedded into machine learning regarding network structure,loss function,and optimization algorithm.We employ the Physically Informed Auto-Encoder(PIAE)to predict porosity and water saturation,which can be trained without labeled reservoir parameters using self-supervised learning techniques.This approach effectively achieves automated interpretation and facilitates generalization across diverse datasets.
基金supported by the National Natural Science Foundation of China(Grant No.12272144).
文摘A data-driven model ofmultiple variable cutting(M-VCUT)level set-based substructure is proposed for the topology optimization of lattice structures.TheM-VCUTlevel setmethod is used to represent substructures,enriching their diversity of configuration while ensuring connectivity.To construct the data-driven model of substructure,a database is prepared by sampling the space of substructures spanned by several substructure prototypes.Then,for each substructure in this database,the stiffness matrix is condensed so that its degrees of freedomare reduced.Thereafter,the data-drivenmodel of substructures is constructed through interpolationwith compactly supported radial basis function(CS-RBF).The inputs of the data-driven model are the design variables of topology optimization,and the outputs are the condensed stiffness matrix and volume of substructures.During the optimization,this data-driven model is used,thus avoiding repeated static condensation that would requiremuch computation time.Several numerical examples are provided to verify the proposed method.
基金supported by National Natural Science Foundation of China(Grant No.52375380)National Key R&D Program of China(Grant No.2022YFB3402200)the Key Project of National Natural Science Foundation of China(Grant No.12032018).
文摘The outstanding comprehensive mechanical properties of newly developed hybrid lattice structures make them useful in engineering applications for bearing multiple mechanical loads.Additive-manufacturing technologies make it possible to fabricate these highly spatially programmable structures and greatly enhance the freedom in their design.However,traditional analytical methods do not sufficiently reflect the actual vibration-damping mechanism of lattice structures and are limited by their high computational cost.In this study,a hybrid lattice structure consisting of various cells was designed based on quasi-static and vibration experiments.Subsequently,a novel parametric design method based on a data-driven approach was developed for hybrid lattices with engineered properties.The response surface method was adopted to define the sensitive optimization target.A prediction model for the lattice geometric parameters and vibration properties was established using a backpropagation neural network.Then,it was integrated into the genetic algorithm to create the optimal hybrid lattice with varying geometric features and the required wide-band vibration-damping characteristics.Validation experiments were conducted,demonstrating that the optimized hybrid lattice can achieve the target properties.In addition,the data-driven parametric design method can reduce computation time and be widely applied to complex structural designs when analytical and empirical solutions are unavailable.
基金funded in part by the National Natural Science Foundation of China under Grant 62401167 and 62192712in part by the Key Laboratory of Marine Environmental Survey Technology and Application,Ministry of Natural Resources,P.R.China under Grant MESTA-2023-B001in part by the Stable Supporting Fund of National Key Laboratory of Underwater Acoustic Technology under Grant JCKYS2022604SSJS007.
文摘The Underwater Acoustic(UWA)channel is bandwidth-constrained and experiences doubly selective fading.It is challenging to acquire perfect channel knowledge for Orthogonal Frequency Division Multiplexing(OFDM)communications using a finite number of pilots.On the other hand,Deep Learning(DL)approaches have been very successful in wireless OFDM communications.However,whether they will work underwater is still a mystery.For the first time,this paper compares two categories of DL-based UWA OFDM receivers:the DataDriven(DD)method,which performs as an end-to-end black box,and the Model-Driven(MD)method,also known as the model-based data-driven method,which combines DL and expert OFDM receiver knowledge.The encoder-decoder framework and Convolutional Neural Network(CNN)structure are employed to establish the DD receiver.On the other hand,an unfolding-based Minimum Mean Square Error(MMSE)structure is adopted for the MD receiver.We analyze the characteristics of different receivers by Monte Carlo simulations under diverse communications conditions and propose a strategy for selecting a proper receiver under different communication scenarios.Field trials in the pool and sea are also conducted to verify the feasibility and advantages of the DL receivers.It is observed that DL receivers perform better than conventional receivers in terms of bit error rate.
文摘When assessing seismic liquefaction potential with data-driven models,addressing the uncertainties of establishing models,interpreting cone penetration tests(CPT)data and decision threshold is crucial for avoiding biased data selection,ameliorating overconfident models,and being flexible to varying practical objectives,especially when the training and testing data are not identically distributed.A workflow characterized by leveraging Bayesian methodology was proposed to address these issues.Employing a Multi-Layer Perceptron(MLP)as the foundational model,this approach was benchmarked against empirical methods and advanced algorithms for its efficacy in simplicity,accuracy,and resistance to overfitting.The analysis revealed that,while MLP models optimized via maximum a posteriori algorithm suffices for straightforward scenarios,Bayesian neural networks showed great potential for preventing overfitting.Additionally,integrating decision thresholds through various evaluative principles offers insights for challenging decisions.Two case studies demonstrate the framework's capacity for nuanced interpretation of in situ data,employing a model committee for a detailed evaluation of liquefaction potential via Monte Carlo simulations and basic statistics.Overall,the proposed step-by-step workflow for analyzing seismic liquefaction incorporates multifold testing and real-world data validation,showing improved robustness against overfitting and greater versatility in addressing practical challenges.This research contributes to the seismic liquefaction assessment field by providing a structured,adaptable methodology for accurate and reliable analysis.
基金funded by the Key Research and Development Program of Shaanxi,China(No.2024GX-YBXM-503)the National Natural Science Foundation of China(No.51974254)。
文摘Hydraulic fracturing technology has achieved remarkable results in improving the production of tight gas reservoirs,but its effectiveness is under the joint action of multiple factors of complexity.Traditional analysis methods have limitations in dealing with these complex and interrelated factors,and it is difficult to fully reveal the actual contribution of each factor to the production.Machine learning-based methods explore the complex mapping relationships between large amounts of data to provide datadriven insights into the key factors driving production.In this study,a data-driven PCA-RF-VIM(Principal Component Analysis-Random Forest-Variable Importance Measures)approach of analyzing the importance of features is proposed to identify the key factors driving post-fracturing production.Four types of parameters,including log parameters,geological and reservoir physical parameters,hydraulic fracturing design parameters,and reservoir stimulation parameters,were inputted into the PCA-RF-VIM model.The model was trained using 6-fold cross-validation and grid search,and the relative importance ranking of each factor was finally obtained.In order to verify the validity of the PCA-RF-VIM model,a consolidation model that uses three other independent data-driven methods(Pearson correlation coefficient,RF feature significance analysis method,and XGboost feature significance analysis method)are applied to compare with the PCA-RF-VIM model.A comparison the two models shows that they contain almost the same parameters in the top ten,with only minor differences in one parameter.In combination with the reservoir characteristics,the reasonableness of the PCA-RF-VIM model is verified,and the importance ranking of the parameters by this method is more consistent with the reservoir characteristics of the study area.Ultimately,the ten parameters are selected as the controlling factors that have the potential to influence post-fracturing gas production,as the combined importance of these top ten parameters is 91.95%on driving natural gas production.Analyzing and obtaining these ten controlling factors provides engineers with a new insight into the reservoir selection for fracturing stimulation and fracturing parameter optimization to improve fracturing efficiency and productivity.
文摘In the rapidly evolving technological landscape,state-owned enterprises(SOEs)encounter significant challenges in sustaining their competitiveness through efficient R&D management.Integrated Product Development(IPD),with its emphasis on cross-functional teamwork,concurrent engineering,and data-driven decision-making,has been widely recognized for enhancing R&D efficiency and product quality.However,the unique characteristics of SOEs pose challenges to the effective implementation of IPD.The advancement of big data and artificial intelligence technologies offers new opportunities for optimizing IPD R&D management through data-driven decision-making models.This paper constructs and validates a data-driven decision-making model tailored to the IPD R&D management of SOEs.By integrating data mining,machine learning,and other advanced analytical techniques,the model serves as a scientific and efficient decision-making tool.It aids SOEs in optimizing R&D resource allocation,shortening product development cycles,reducing R&D costs,and improving product quality and innovation.Moreover,this study contributes to a deeper theoretical understanding of the value of data-driven decision-making in the context of IPD.
基金National Natural Science Foundation of China(Project No.:12371428)Projects of the Provincial College Students’Innovation and Training Program in 2024(Project No.:S202413023106,S202413023110)。
文摘This paper focuses on the numerical solution of a tumor growth model under a data-driven approach.Based on the inherent laws of the data and reasonable assumptions,an ordinary differential equation model for tumor growth is established.Nonlinear fitting is employed to obtain the optimal parameter estimation of the mathematical model,and the numerical solution is carried out using the Matlab software.By comparing the clinical data with the simulation results,a good agreement is achieved,which verifies the rationality and feasibility of the model.
基金partially supported by the Science Technology Development Fund,Macao Special Administrative Region(0029/2023/RIA1)the National Research Foundation Singapore under its AI Singapore Programme(AISG2-GC-2023-007)
文摘With the development of cyber-physical systems,system security faces more risks from cyber-attacks.In this work,we study the problem that an external attacker implements covert sensor and actuator attacks with resource constraints(the total resource consumption of the attacks is not greater than a given initial resource of the attacker)to mislead a discrete event system under supervisory control to reach unsafe states.We consider that the attacker can implement two types of attacks:One by modifying the sensor readings observed by a supervisor and the other by enabling the actuator commands disabled by the supervisor.Each attack has its corresponding resource consumption and remains covert.To solve this problem,we first introduce a notion of combined-attackability to determine whether a closedloop system may reach an unsafe state after receiving attacks with resource constraints.We develop an algorithm to construct a corrupted supervisor under attacks,provide a verification method for combined-attackability in polynomial time based on a plant,a corrupted supervisor,and an attacker's initial resource,and propose a corresponding attack synthesis algorithm.The effectiveness of the proposed method is illustrated by an example.
基金This paper is the research result of“Research on Innovation of Evidence-Based Teaching Paradigm in Vocational Education under the Background of New Quality Productivity”(2024JXQ176)the Shandong Province Artificial Intelligence Education Research Project(SDDJ202501035),which explores the application of artificial intelligence big models in student value-added evaluation from an evidence-based perspective。
文摘Based on the educational evaluation reform,this study explores the construction of an evidence-based value-added evaluation system based on data-driven,aiming to solve the limitations of traditional evaluation methods.The research adopts the method of combining theoretical analysis and practical application,and designs the evidence-based value-added evaluation framework,which includes the core elements of a multi-source heterogeneous data acquisition and processing system,a value-added evaluation agent based on a large model,and an evaluation implementation and application mechanism.Through empirical research verification,the evaluation system has remarkable effects in improving learning participation,promoting ability development,and supporting teaching decision-making,and provides a theoretical reference and practical path for educational evaluation reform in the new era.The research shows that the evidence-based value-added evaluation system based on data-driven can reflect students’actual progress more fairly and objectively by accurately measuring the difference in starting point and development range of students,and provide strong support for the realization of high-quality education development.
基金supported by the Strategic Research and Consulting Project of the Chinese Academy of Engineering[grant number 2024-XBZD-14]the National Natural Science Foundation of China[grant numbers 42192553 and 41922036]the Fundamental Research Funds for the Central Universities–Cemac“GeoX”Interdisciplinary Program[grant number 020714380207]。
文摘The impacts of lateral boundary conditions(LBCs)provided by numerical models and data-driven networks on convective-scale ensemble forecasts are investigated in this study.Four experiments are conducted on the Hangzhou RDP(19th Hangzhou Asian Games Research Development Project on Convective-scale Ensemble Prediction and Application)testbed,with the LBCs respectively sourced from National Centers for Environmental Prediction(NCEP)Global Forecast System(GFS)forecasts with 33 vertical levels(Exp_GFS),Pangu forecasts with 13 vertical levels(Exp_Pangu),Fuxi forecasts with 13 vertical levels(Exp_Fuxi),and NCEP GFS forecasts with the vertical levels reduced to 13(the same as those of Exp_Pangu and Exp_Fuxi)(Exp_GFSRDV).In general,Exp_Pangu performs comparably to Exp_GFS,while Exp_Fuxi shows slightly inferior performance compared to Exp_Pangu,possibly due to its less accurate large-scale predictions.Therefore,the ability of using data-driven networks to efficiently provide LBCs for convective-scale ensemble forecasts has been demonstrated.Moreover,Exp_GFSRDV has the worst convective-scale forecasts among the four experiments,which indicates the potential improvement of using data-driven networks for LBCs by increasing the vertical levels of the networks.However,the ensemble spread of the four experiments barely increases with lead time.Thus,each experiment has insufficient ensemble spread to present realistic forecast uncertainties,which will be investigated in a future study.
基金Supported by the National Natural Science Foundation of China under Grant No.52131102.
文摘With the rapid advancement of machine learning technology and its growing adoption in research and engineering applications,an increasing number of studies have embraced data-driven approaches for modeling wind turbine wakes.These models leverage the ability to capture complex,high-dimensional characteristics of wind turbine wakes while offering significantly greater efficiency in the prediction process than physics-driven models.As a result,data-driven wind turbine wake models are regarded as powerful and effective tools for predicting wake behavior and turbine power output.This paper aims to provide a concise yet comprehensive review of existing studies on wind turbine wake modeling that employ data-driven approaches.It begins by defining and classifying machine learning methods to facilitate a clearer understanding of the reviewed literature.Subsequently,the related studies are categorized into four key areas:wind turbine power prediction,data-driven analytic wake models,wake field reconstruction,and the incorporation of explicit physical constraints.The accuracy of data-driven models is influenced by two primary factors:the quality of the training data and the performance of the model itself.Accordingly,both data accuracy and model structure are discussed in detail within the review.
基金supported by the Jiangsu Provincial Science and Technology Project Basic Research Program(Natural Science Foundation of Jiangsu Province)(No.BK20211283).
文摘NJmat is a user-friendly,data-driven machine learning interface designed for materials design and analysis.The platform integrates advanced computational techniques,including natural language processing(NLP),large language models(LLM),machine learning potentials(MLP),and graph neural networks(GNN),to facili-tate materials discovery.The platform has been applied in diverse materials research areas,including perovskite surface design,catalyst discovery,battery materials screening,structural alloy design,and molecular informatics.By automating feature selection,predictive modeling,and result interpretation,NJmat accelerates the development of high-performance materials across energy storage,conversion,and structural applications.Additionally,NJmat serves as an educational tool,allowing students and researchers to apply machine learning techniques in materials science with minimal coding expertise.Through automated feature extraction,genetic algorithms,and interpretable machine learning models,NJmat simplifies the workflow for materials informatics,bridging the gap between AI and experimental materials research.The latest version(available at https://figshare.com/articles/software/NJmatML/24607893(accessed on 01 January 2025))enhances its functionality by incorporating NJmatNLP,a module leveraging language models like MatBERT and those based on Word2Vec to support materials prediction tasks.By utilizing clustering and cosine similarity analysis with UMAP visualization,NJmat enables intuitive exploration of materials datasets.While NJmat primarily focuses on structure-property relationships and the discovery of novel chemistries,it can also assist in optimizing processing conditions when relevant parameters are included in the training data.By providing an accessible,integrated environment for machine learning-driven materials discovery,NJmat aligns with the objectives of the Materials Genome Initiative and promotes broader adoption of AI techniques in materials science.
基金supported by the researcher supporting Project number(RSPD2025R636),King Saud University,Riyadh,Saudi Arabia.
文摘Image watermarking is a powerful tool for media protection and can provide promising results when combined with other defense mechanisms.Image watermarking can be used to protect the copyright of digital media by embedding a unique identifier that identifies the owner of the content.Image watermarking can also be used to verify the authenticity of digital media,such as images or videos,by ascertaining the watermark information.In this paper,a mathematical chaos-based image watermarking technique is proposed using discrete wavelet transform(DWT),chaotic map,and Laplacian operator.The DWT can be used to decompose the image into its frequency components,chaos is used to provide extra security defense by encrypting the watermark signal,and the Laplacian operator with optimization is applied to the mid-frequency bands to find the sharp areas in the image.These mid-frequency bands are used to embed the watermarks by modifying the coefficients in these bands.The mid-sub-band maintains the invisible property of the watermark,and chaos combined with the second-order derivative Laplacian is vulnerable to attacks.Comprehensive experiments demonstrate that this approach is effective for common signal processing attacks,i.e.,compression,noise addition,and filtering.Moreover,this approach also maintains image quality through peak signal-to-noise ratio(PSNR)and structural similarity index metrics(SSIM).The highest achieved PSNR and SSIM values are 55.4 dB and 1.In the same way,normalized correlation(NC)values are almost 10%–20%higher than comparative research.These results support assistance in copyright protection in multimedia content.