Funding: This work has been funded by King Saud University, Riyadh, Saudi Arabia, through Researchers Supporting Project Number (RSPD2024R857).
Abstract: Scalability and data privacy are vital for training and deploying large-scale deep learning models. Federated learning trains models on private data by aggregating weights from many devices and taking advantage of the device-agnostic environment of web browsers. However, relying on a central server in browser-based federated systems can limit scalability and disrupt the training process as the number of clients grows. Moreover, information about the training dataset can potentially be extracted from the shared weights, reducing the privacy of the local data used for training. In this paper, we investigate the challenges of scalability and data privacy to increase the efficiency of distributed model training. We propose a web federated learning exchange (WebFLex) framework that improves the decentralization of the federated learning process. WebFLex is also designed to secure distributed, scalable federated learning systems that operate in web browsers across heterogeneous devices. WebFLex uses peer-to-peer interactions and secure weight exchange via browser-to-browser web real-time communication (WebRTC), effectively eliminating the need for a central server. WebFLex has been evaluated in various setups using the MNIST dataset. Experimental results show WebFLex's ability to improve the scalability of federated learning systems, allowing a smooth increase in the number of participating devices without central data aggregation. In addition, WebFLex maintains a robust federated learning procedure even in the face of device disconnections and network variability. It also improves data privacy by adding artificial noise, which achieves an appropriate balance between accuracy and privacy preservation.
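The privacy mechanism described above, adding artificial noise to weights before they are shared, can be illustrated with a minimal sketch. The function names and the Gaussian noise scale below are illustrative assumptions, not WebFLex's actual implementation.

```python
import numpy as np

def noise_weights(weights, noise_std=0.01, rng=None):
    """Add zero-mean Gaussian noise to each weight tensor before it is
    shared with peers, trading a little accuracy for privacy.
    noise_std is an assumed value, not the paper's setting."""
    rng = np.random.default_rng() if rng is None else rng
    return [w + rng.normal(0.0, noise_std, size=w.shape) for w in weights]

def aggregate(peer_weights):
    """Average the (noised) weight tensors received from peers,
    layer by layer, as a stand-in for decentralized aggregation."""
    return [np.mean(layer, axis=0) for layer in zip(*peer_weights)]

# Example: three peers exchange noised copies of a two-layer model.
local = [np.ones((4, 4)), np.zeros(4)]
shared = [noise_weights(local) for _ in range(3)]
merged = aggregate(shared)
print(merged[0].shape, merged[1].shape)  # (4, 4) (4,)
```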
Abstract: This paper presents a reference methodology for process orchestration that accelerates the development of Large Language Model (LLM) applications by integrating knowledge bases, API access, and deep web retrieval. By incorporating structured knowledge, the methodology enhances LLMs' reasoning abilities, enabling more accurate and efficient handling of complex tasks. Integration with open APIs allows LLMs to access external services and real-time data, expanding their functionality and application range. Through real-world case studies, we demonstrate that this approach significantly improves the efficiency and adaptability of LLM-based applications, especially for time-sensitive tasks. Our methodology provides practical guidelines for developers to rapidly create robust and adaptable LLM applications capable of navigating dynamic information environments and performing effectively across diverse tasks.
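The orchestration pattern described above (knowledge-base lookup, API access, and web retrieval feeding an LLM) is sketched below with hypothetical stub functions; none of the names correspond to the paper's actual components, and the LLM is stubbed so the snippet runs offline.

```python
from typing import Callable, List

# Hypothetical retrieval sources; each returns text snippets for a query.
def search_knowledge_base(query: str) -> List[str]:
    return [f"[KB] structured fact related to '{query}'"]

def call_open_api(query: str) -> List[str]:
    return [f"[API] real-time value for '{query}'"]

def deep_web_retrieve(query: str) -> List[str]:
    return [f"[WEB] retrieved passage about '{query}'"]

def orchestrate(query: str, llm: Callable[[str], str]) -> str:
    """Gather context from all sources, then ask the LLM to answer
    the query grounded in that context."""
    context: List[str] = []
    for source in (search_knowledge_base, call_open_api, deep_web_retrieve):
        context.extend(source(query))
    prompt = "Answer using the context below.\n" + "\n".join(context) + f"\nQuestion: {query}"
    return llm(prompt)

# Stub LLM so the sketch runs without any external service.
echo_llm = lambda prompt: f"(stubbed answer based on {prompt.count('[')} context items)"
print(orchestrate("current exchange rate", echo_llm))
```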
Abstract: In this paper, we present a novel approach to modeling user request patterns in the World Wide Web. Instead of focusing on user traffic for web pages, we capture user interaction at the object level of the web pages. Our framework model consists of three sub-models: one for user file access, one for web pages, and one for storage servers. Web pages are assumed to consist of objects of different types and sizes, which are characterized using several categories: articles, media, and mosaics. The model is implemented with a discrete event simulation and then used to investigate the performance of our system over a variety of parameters in our model. Our performance measure of choice is mean response time, and by varying the composition of web pages through our categories, we find that our framework model is able to capture a wide range of conditions that serve as a basis for generating a variety of user request patterns. In addition, we are able to establish a set of parameters that can be used as base cases. One of the goals of this research is for the framework model to be general enough that its parameters can be varied so that it can serve as input for investigating other distributed applications that require the generation of user request access patterns.
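Below is a simplified workload generator in the spirit of the object-level page model described above: pages are drawn as bundles of objects by category and a mean response time is reported. It is not the paper's discrete event simulation, and the object categories, sizes, and throughput are assumed values.

```python
import random

# Illustrative object categories with (mean size in KB, count per page).
CATEGORIES = {"article": (30, 1), "media": (500, 2), "mosaic": (15, 8)}
LINK_KBPS = 1000.0  # assumed server throughput per request, KB/s

def simulate(num_pages=1000, seed=0):
    """Generate pages as bundles of objects and return the mean
    response time, with per-object service time proportional to size."""
    rng = random.Random(seed)
    total_time = 0.0
    for _ in range(num_pages):
        page_time = 0.0
        for mean_kb, count in CATEGORIES.values():
            for _ in range(count):
                size = rng.expovariate(1.0 / mean_kb)  # object size draw
                page_time += size / LINK_KBPS          # service time
        total_time += page_time
    return total_time / num_pages

print(f"mean response time ~ {simulate():.3f} s per page")
```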
Abstract: Smart contracts on the Ethereum blockchain continue to revolutionize decentralized applications (dApps) by allowing for self-executing agreements. However, bad actors have continuously found ways to exploit smart contracts for personal financial gain, which undermines the integrity of the Ethereum blockchain. This paper proposes a tool called SADA (Static and Dynamic Analyzer), a novel approach to smart contract vulnerability detection that uses multiple Large Language Model (LLM) agents to analyze and flag suspicious Solidity code in Ethereum smart contracts. SADA not only improves upon existing vulnerability detection methods but also paves the way for more secure smart contract development practices in the rapidly evolving blockchain ecosystem.
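The multi-agent flagging idea can be sketched as follows; the LLM agents are replaced by stub heuristics so the snippet runs offline, and every name here is hypothetical rather than part of SADA.

```python
from typing import Callable, List

SOLIDITY_SNIPPET = """
function withdraw(uint amount) public {
    require(tx.origin == owner);
    (bool ok, ) = msg.sender.call{value: amount}("");
    balances[msg.sender] -= amount;
}
"""

# Stub "agents": each inspects the source and returns a list of findings.
def auth_agent(src: str) -> List[str]:
    return ["tx.origin used for authorization"] if "tx.origin" in src else []

def reentrancy_agent(src: str) -> List[str]:
    # Flags an external call that appears before a state update.
    call_pos, update_pos = src.find(".call{"), src.find("-=")
    return ["state updated after external call"] if 0 <= call_pos < update_pos else []

def analyze(src: str, agents: List[Callable[[str], List[str]]]) -> List[str]:
    """Run every agent and merge the flags, mimicking the way multiple
    analyzers could vote on suspicious Solidity code."""
    findings: List[str] = []
    for agent in agents:
        findings.extend(agent(src))
    return findings

print(analyze(SOLIDITY_SNIPPET, [auth_agent, reentrancy_agent]))
```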
Abstract: Power grid disaster risk early warning typically relies on edge analysis of data features to produce warning results, ignoring the effect of the theoretical difference on warning accuracy and leading to low consistency in warning frequency. To address this, a method for constructing a power grid disaster risk early-warning model based on Web geographic information system (GIS) technology is proposed. A risk early-warning indicator system is established on the basis of WebGIS technology, with the distance to hazard points analyzed as the raw feature; a power grid disaster risk early-warning model is then built, the theoretical-difference loss is computed to optimize the model parameters, and the model results are output iteratively. Combined with the reliability of power grid equipment, risk warning levels are defined, realizing power grid disaster risk early warning. Experimental results show that the proposed model exhibits high consistency in warning frequency and high accuracy in warning results, meeting the practical needs of power grid operation, maintenance, and disaster protection.
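As an illustration only (not the paper's model), the sketch below combines a hazard-distance feature with equipment reliability to map a risk score onto warning levels; the scoring formula, weighting, and thresholds are assumed values.

```python
def risk_score(distance_km: float, reliability: float) -> float:
    """Toy risk score: closer hazards and less reliable equipment raise
    the score. The 1/(1+d) proximity term and the equal weighting are
    illustrative assumptions."""
    proximity = 1.0 / (1.0 + distance_km)
    return 0.5 * proximity + 0.5 * (1.0 - reliability)

def warning_level(score: float) -> str:
    # Assumed thresholds for the warning tiers.
    if score >= 0.6:
        return "red"
    if score >= 0.4:
        return "orange"
    if score >= 0.2:
        return "yellow"
    return "blue"

for d, r in [(0.5, 0.90), (5.0, 0.99), (20.0, 0.80)]:
    s = risk_score(d, r)
    print(f"distance={d} km, reliability={r}: score={s:.2f}, level={warning_level(s)}")
```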
Funding: The National Natural Science Foundation of China (No. 70471023).
Abstract: In order to solve the problem of modeling product configuration knowledge at the semantic level to successfully implement the mass customization strategy, an approach to ontology-based configuration knowledge modeling, combining semantic web technologies, was proposed. A general configuration ontology was developed to provide a common concept structure for modeling the configuration knowledge and rules of specific product domains. The OWL web ontology language and the semantic web rule language (SWRL) were used to formally represent the configuration ontology, domain configuration knowledge, and rules, enhancing the consistency, maintainability, and reusability of all the configuration knowledge. The configuration knowledge modeling of a customizable personal computer family shows that the approach can provide explicit, computer-understandable knowledge semantics for specific product configuration domains and can efficiently support automatic configuration tasks for complex products.
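A constraint check of the kind an SWRL rule would express can be sketched in plain Python as a stand-in; the component classes and the compatibility rule below are hypothetical and not drawn from the paper's PC-family ontology.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Component:
    name: str
    kind: str        # e.g. "cpu", "motherboard"
    socket: str      # interface the component provides or requires

def compatible(config: List[Component]) -> List[str]:
    """Plain-Python stand-in for an SWRL-style rule: every CPU in the
    configuration must match the socket of some motherboard,
    otherwise a violation is reported."""
    violations = []
    boards = [c for c in config if c.kind == "motherboard"]
    for cpu in (c for c in config if c.kind == "cpu"):
        if not any(b.socket == cpu.socket for b in boards):
            violations.append(f"{cpu.name}: no motherboard with socket {cpu.socket}")
    return violations

pc = [Component("CPU-A", "cpu", "LGA1700"),
      Component("Board-B", "motherboard", "AM5")]
print(compatible(pc))  # reports the socket mismatch
```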
Abstract: 1. Introduction
In the early 1990s, a new computer network application technology, Electronic Data Interchange (EDI), attracted worldwide attention for its distinctive simplicity, efficiency, security, and speed, and was regarded as a powerful means of improving work efficiency, service quality, and enterprise competitiveness [1]. EDI aims to transmit business forms electronically, which is why it is sometimes called paperless trade. Electronic forms are still supplemented by paper forms, but paper is reduced from its former primary or sole role to a secondary, supporting one. In other words, the main significance of EDI lies not in saving paper but in its speed, its elimination of duplicated work, its efficiency gains, and its cost savings; the essence of EDI technology is therefore fast transmission (for example, cutting delivery from days by post to minutes or even real time) and labor savings (no repeated printing and re-entry of forms), thereby improving efficiency and reducing costs.