The upgraded ASUS Transformer Book T100HA offers a more agile transformation mode: its magnetic docking connector, validated through 20,000 plug/unplug cycles and 25,000 open/close cycles, switches between laptop and tablet in one second. As the world's thinnest 10.1-inch Windows tablet, the upgraded ASUS Transformer Book T100HA measures just 8.45 mm in its tablet section and weighs about 1 kg in total, roughly 20% thinner and lighter than the previous-generation model and about as thick as an HB pencil.
Detecting faces under occlusion remains a significant challenge in computer vision due to variations caused by masks, sunglasses, and other obstructions. Addressing this issue is crucial for applications such as surveillance, biometric authentication, and human-computer interaction. This paper provides a comprehensive review of face detection techniques developed to handle occluded faces. Studies are categorized into four main approaches: feature-based, machine learning-based, deep learning-based, and hybrid methods. We analyzed state-of-the-art studies within each category, examining their methodologies, strengths, and limitations on widely used benchmark datasets, and highlighting their adaptability to partial and severe occlusions. The review also identifies key challenges, including dataset diversity, model generalization, and computational efficiency. Our findings reveal that deep learning methods dominate recent studies, benefiting from their ability to extract hierarchical features and handle complex occlusion patterns. More recently, researchers have increasingly explored Transformer-based architectures, such as the Vision Transformer (ViT) and the Swin Transformer, to further improve detection robustness under challenging occlusion scenarios. In addition, hybrid approaches, which combine traditional and modern techniques, are emerging as a promising direction for improving robustness. This review provides valuable insights for researchers aiming to develop more robust face detection systems and for practitioners seeking to deploy reliable solutions in real-world, occlusion-prone environments. Further improvements and broader datasets are required to develop more scalable, robust, and efficient models that can handle complex occlusions in real-world scenarios.
Face detection is a critical component in modern security, surveillance, and human-computer interaction systems, with widespread applications in smartphones, biometric access control, and public monitoring. However, detecting faces with high levels of occlusion, such as those covered by masks, veils, or scarves, remains a significant challenge, as traditional models often fail to generalize under such conditions. This paper presents a hybrid approach that combines a traditional handcrafted feature extraction technique, the Histogram of Oriented Gradients (HOG), and Canny edge detection with modern deep learning models. The goal is to improve face detection accuracy under occlusions. The proposed method leverages the structural strengths of HOG and edge-based object proposals while exploiting the feature extraction capabilities of Convolutional Neural Networks (CNNs). The effectiveness of the proposed model is assessed using a custom dataset containing 10,000 heavily occluded face images and a subset of the Common Objects in Context (COCO) dataset for non-face samples. The COCO dataset was selected for its variety and realism in background contexts. Experimental evaluations demonstrate significant performance improvements compared to baseline CNN models. Results indicate that DenseNet121 combined with HOG outperforms its counterparts in classification metrics, with an F1-score of 87.96% and precision of 88.02%. Enhanced performance is achieved through reduced false positives and improved localization accuracy with the integration of object proposals based on Canny and contour detection. While the proposed method increases inference time from 33.52 to 97.80 ms, it achieves a notable improvement in precision, from 80.85% to 88.02%, when comparing the baseline DenseNet121 model to its hybrid counterpart. Limitations of the method include higher computational cost and the need for careful tuning of parameters across the edge detection, handcrafted features, and CNN components. These findings highlight the potential of combining handcrafted and learned features for occluded face detection tasks.
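The handcrafted half of the hybrid above rests on HOG, whose core idea is a magnitude-weighted histogram of gradient orientations. The following is a minimal, dependency-free sketch of that core step only (no cell/block normalization); a real pipeline would use skimage.feature.hog or OpenCV's HOGDescriptor instead:

```python
import math

# Minimal gradient-orientation histogram -- the core idea behind HOG.
# Assumes `img` is a 2-D list of grayscale intensities; this is a sketch,
# not the paper's implementation.
def orientation_histogram(img, bins=9):
    h, w = len(img), len(img[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]   # horizontal gradient
            gy = img[y + 1][x] - img[y - 1][x]   # vertical gradient
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180  # unsigned orientation
            hist[int(ang // (180 / bins)) % bins] += mag  # magnitude-weighted vote
    return hist

# A vertical step edge: all gradient energy lands in the 0-degree bin
# (purely horizontal gradient direction).
img = [[0, 0, 1, 1]] * 4
print(orientation_histogram(img))
```

In the full HOG descriptor these per-cell histograms are additionally contrast-normalized over overlapping blocks, which is what gives the feature its illumination robustness.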
The implementation of Countermeasure Techniques (CTs) in Network-On-Chip (NoC) based Multiprocessor System-On-Chip (MPSoC) routers against the Flooding Denial-of-Service Attack (F-DoSA) falls under Multi-Criteria Decision-Making (MCDM) owing to three main concerns: traffic variations, multiple traffic-feature evaluation criteria, and the prioritization of NoC routers as alternatives. In this study, we propose a comprehensive evaluation of various NoC traffic features to identify the most efficient routers under F-DoSA scenarios. Consequently, an MCDM approach is essential to address these emerging challenges. Because recent MCDM approaches suffer from issues such as uncertainty, this study utilizes Fuzzy-Weighted Zero-Inconsistency (FWZIC) to estimate the criteria weight values and the Fuzzy Decision by Opinion Score Method (FDOSM) to rank the routers, both extended with single-valued neutrosophic fuzzy sets (SvN-FWZIC and SvN-FDOSM) to overcome the ambiguity. The results obtained using the SvN-FWZIC method indicate that the max packet count has the highest importance among the evaluated criteria, with a weight of 0.1946. In contrast, the hop count is identified as the least significant criterion, with a weight of 0.1090. The remaining criteria fall within a range of intermediate importance: enqueue time scores 0.1845, packet count decremented and traversal index each score 0.1262, packet count incremented scores 0.1124, and packet count index scores 0.1472. In terms of ranking, SvN-FDOSM has two approaches: individual and group. Both the individual and group ranking processes show that Router 4 is the most effective router, while Router 3 is the lowest-ranked router under F-DoSA. The sensitivity analysis shows high ranking stability across all 10 scenarios. This approach offers essential feedback for making proper decisions in the design of countermeasure techniques in the domain of NoC-based MPSoC.
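Once criterion weights are fixed, ranking alternatives reduces to aggregating weighted criterion values per router. The sketch below uses the weights reported above but replaces SvN-FDOSM's fuzzy opinion scores with a plain weighted sum, and the per-router feature values are hypothetical, for illustration only:

```python
# Weighted-sum ranking sketch using the criterion weights reported above.
# NOTE: a plain weighted sum is a simplification of SvN-FDOSM, and the
# router feature values below are invented for illustration.
weights = {
    "max_packet_count": 0.1946,
    "enqueue_time": 0.1845,
    "packet_count_index": 0.1472,
    "packet_count_decremented": 0.1262,
    "traversal_index": 0.1262,
    "packet_count_incremented": 0.1124,
    "hop_count": 0.1090,
}

def score(router_features):
    """Aggregate normalized (0..1) criterion values into one scalar score."""
    return sum(weights[c] * v for c, v in router_features.items())

routers = {  # hypothetical normalized feature values per router
    "Router 3": dict.fromkeys(weights, 0.2),
    "Router 4": dict.fromkeys(weights, 0.9),
}
ranking = sorted(routers, key=lambda r: score(routers[r]), reverse=True)
print(ranking)  # best router first
```

Note that the seven reported weights sum to approximately 1, as FWZIC-style weighting requires.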
We discuss the solution of complex multistage decision problems using methods based on the idea of policy iteration (PI), i.e., starting from some base policy and generating an improved policy. Rollout is the simplest method of this type, where just one improved policy is generated. We can view PI as repeated application of rollout, where the rollout policy at each iteration serves as the base policy for the next iteration. In contrast with PI, rollout has a robustness property: it can be applied on-line and is suitable for on-line replanning. Moreover, rollout can use as base policy one of the policies produced by PI, thereby improving on that policy. This is the type of scheme underlying the prominently successful AlphaZero chess program. In this paper we focus on rollout and PI-like methods for problems where the control consists of multiple components, each selected (conceptually) by a separate agent. This is the class of multiagent problems where the agents have a shared objective function and shared, perfect state information. Based on a problem reformulation that trades off control space complexity with state space complexity, we develop an approach whereby, at every stage, the agents sequentially (one at a time) execute a local rollout algorithm that uses a base policy together with some coordinating information from the other agents. The amount of total computation required at every stage grows linearly with the number of agents. By contrast, in the standard rollout algorithm, the amount of total computation grows exponentially with the number of agents. Despite the dramatic reduction in required computation, we show that our multiagent rollout algorithm has the fundamental cost improvement property of standard rollout: it guarantees improved performance relative to the base policy. We also discuss autonomous multiagent rollout schemes that allow the agents to make decisions autonomously through the use of precomputed signaling information, which is sufficient to maintain the cost improvement property without any on-line coordination of control selection between the agents. For discounted and other infinite horizon problems, we also consider exact and approximate PI algorithms involving a new type of one-agent-at-a-time policy improvement operation. For one of our PI algorithms, we prove convergence to an agent-by-agent optimal policy, thus establishing a connection with the theory of teams. For another PI algorithm, which is executed over a more complex state space, we prove convergence to an optimal policy. Approximate forms of these algorithms are also given, based on the use of policy and value neural networks. These PI algorithms, in both their exact and approximate forms, are strictly off-line methods, but they can be used to provide a base policy for use in an on-line multiagent rollout scheme.
The paper studies the reachability problem for autonomous affine systems on n-dimensional polytopes. Our goal is to obtain both the largest positive invariant set in the polytope and the backward reachable set (the attraction domain) of each facet. Special attention is paid to the largest stable invariant affine subspace. After presenting several useful properties of these sets, a partition procedure is given to determine the largest positive invariant set in the polytope and all the attraction domains of the facets.
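For context, the two objects the abstract names can be restated in standard notation (a hedged paraphrase; the paper's own definitions may differ in detail):

```latex
% Affine system \dot{x} = A x + b evolving on a polytope P with facets F_1,\dots,F_k.
\begin{align*}
\text{Largest positive invariant set: } &
  \mathcal{I} = \{\, x(0) \in P : x(t) \in P \ \text{for all } t \ge 0 \,\}, \\
\text{Attraction domain of facet } F_j : \ &
  \mathcal{A}(F_j) = \{\, x(0) \in P : x(\cdot) \text{ leaves } P \text{ through } F_j \,\}.
\end{align*}
```

Together these sets partition the polytope, which is why a partition procedure suffices to compute all of them at once.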
The purpose of this review and commentary was to provide a historical and evidence-based account of organic acids, and the biochemical and organic chemistry evidence for why cells do not produce metabolites that are acids. The scientific study of acids has a long history dating to the 16th and 17th centuries, and the definition of an acid was proposed in 1884 as a molecule that, when in aqueous solution, releases a hydrogen ion (H⁺). There are three common ionizable functional groups for molecules classified as acids: 1) the carboxyl group, 2) the phosphoryl group, and 3) the amine group. The propensity of a cation to associate with or dissociate from a negatively charged atom is quantified by the equilibrium constant (K_eq) of the dissociation (K_eq = K_d), which for lactic acid (HLa) vs. lactate (La⁻) is expressed as K_d = [H⁺][La⁻]/[HLa], with an association constant 1/K_d = 4677.3514 (ionic strength = 0.01 mol·L⁻¹, T = 25 °C). The negative log₁₀ of K_d, the pK_d, gives the pH at which half of the molecules are ionized; for HLa, pK_d = 3.67. Thus, knowing the pK_d and the pH of the solution in question reveals the extent of ionization vs. acidification of molecules that are classified as acids.
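The claim that pK_d and pH together determine the extent of ionization follows directly from the Henderson-Hasselbalch relation, pH = pK_d + log₁₀([La⁻]/[HLa]). A minimal sketch, using the pK_d of 3.67 quoted above:

```python
# Fraction of an acid HA present in ionized form (A-) at a given pH,
# from the Henderson-Hasselbalch relation: pH = pKd + log10([A-]/[HA]).
def fraction_ionized(pKd, pH):
    ratio = 10 ** (pH - pKd)          # [A-] / [HA]
    return ratio / (1.0 + ratio)

# Lactic acid: pKd = 3.67 (value quoted in the text).
# At pH = pKd, exactly half of the molecules are ionized.
print(round(fraction_ionized(3.67, 3.67), 3))   # 0.5
# At physiological pH 7.4, essentially all of it exists as lactate (La-).
print(round(fraction_ionized(3.67, 7.4), 4))
```

At pH 7.4 the ionized fraction exceeds 99.9%, which is the quantitative basis for the review's point that, under cellular conditions, the molecule exists as lactate rather than lactic acid.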
In the current cloud-based Internet-of-Things (IoT) model, smart devices (such as sensors and smartphones) exchange information through the Internet to cooperate and provide services to users, which may be citizens, smart home systems, or industrial applications.
The purpose of this manuscript was to present the evidence for why cells do not produce metabolic acids. In addition, evidence that opposes common viewpoints and arguments used to support the cellular production of lactic acid (HLa) or liver keto-acids is provided. Organic chemistry reveals that many molecules involved in cellular energy catabolism contain functional groups classified as acids. The two main acidic functional groups of these molecules susceptible to H⁺ release are the carboxyl and phosphoryl structures, though the biochemistry and organic chemistry of molecules having these structures reveal that they are produced in a non-acidic, ionized (negatively charged) form, thereby preventing pH-dependent H⁺ release. Added evidence from the industrial production of HLa further reveals that lactate (La⁻) is produced first, followed by an acidification step that converts La⁻ to HLa through pH-dependent H⁺ association. Interestingly, there is a plentiful list of other molecules that are classified as acids and have H⁺ dissociation constants (pK_d) similar to that of HLa. For many metabolic conditions, the cumulative turnover of these molecules is far higher than that of La⁻. The collective evidence documents the non-empirical basis for the construct of the cellular production of HLa, or any other metabolic acid.
Artificial intelligence (AI) and robotics have gone through three generations of development, from the Turing test and the Logic Theory Machine to expert systems and self-driving cars. In today's third generation, AI and robotics have collaboratively been used in many areas of our society, including industry, business, manufacturing, research, and education. There are many challenging problems in developing AI and robotics applications. We launch this new Journal of Artificial Intelligence and Technology to facilitate the exchange of the latest research and practice in AI and related technologies. In this inaugural issue, we first introduce a few key technologies and platforms supporting third-generation AI and robotics application development, based on stacks of technologies and platforms. We present examples of such development environments created by both industry and academia. We also selected eight papers in the related areas to celebrate the foundation of this journal.
Fifth-generation (5G) cellular networks offer high transmission rates in dense urban environments. However, a massive deployment of small cells will be required to provide wide-area coverage, which leads to an increase in the number of handovers (HOs). Mobility management is an important issue that requires considerable attention in heterogeneous networks, where 5G ultra-dense small cells coexist with current fourth-generation (4G) networks. Although mobility robustness optimization (MRO) and load balancing optimization (LBO) functions have been introduced in the 3GPP standard to address HO problems, non-robust and non-optimal algorithms for selecting appropriate HO control parameters (HCPs) still exist, and any optimal solution involves a compromise between the LBO and MRO functions. Thus, HO decision algorithms become inefficient. This paper proposes a conflict resolution technique to address the contradiction between the MRO and LBO functions. The proposed technique exploits reference signal received power (RSRP), cell load, and user speed to adapt the HO margin (HM) and time to trigger (TTT). The estimated HM and TTT depend on a weighting function and the HO type, which is represented by user status during mobility. The proposed technique is validated against other existing algorithms from the literature. Simulation results demonstrate that the proposed technique outperforms existing algorithms across all performance metrics. The proposed technique reduces the overall average HO ping-pong probability, HO failure rate, and interruption time by more than 90%, 46%, and 58%, respectively, compared with the other schemes, across all speed scenarios and over the whole simulation time.
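The HM/TTT pair being adapted above controls the standard 3GPP A3-style handover trigger: a handover is initiated only after the target cell's RSRP exceeds the serving cell's by the margin HM continuously for the duration TTT. A minimal sketch of that trigger logic (sample period and RSRP traces are hypothetical):

```python
# Sketch of the 3GPP A3-style handover trigger that HM/TTT adaptation tunes:
# hand over only if target RSRP exceeds serving RSRP by HM (dB) for TTT (ms).
def handover_triggered(serving_rsrp, target_rsrp, hm_db, ttt_ms, step_ms=40):
    """serving_rsrp / target_rsrp: per-sample RSRP traces (dBm), step_ms apart."""
    held = 0
    for s, t in zip(serving_rsrp, target_rsrp):
        held = held + step_ms if t > s + hm_db else 0  # A3 condition timer
        if held >= ttt_ms:
            return True
    return False

serving = [-90, -92, -94, -96, -98]   # serving cell fading (hypothetical)
target  = [-95, -93, -90, -88, -86]   # target cell improving (hypothetical)
# With a 3 dB margin and an 80 ms TTT, the condition must hold for 2 samples.
print(handover_triggered(serving, target, hm_db=3.0, ttt_ms=80))
```

Raising HM or TTT suppresses ping-pong HOs at the cost of later (possibly failed) handovers, which is exactly the MRO/LBO trade-off the proposed weighting function navigates.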
This study is about Malaysia's investment environment. I've undertaken its writing in part as a reflection on my own involvement with Malaysia over more than half a century (from Malaya in 1963). The study also brings to bear a structure for analysis drawn from the field of political risk analysis. I have been involved with formal (corporate) political risk assessment since 1979 and bring that experience into the discussion that follows. I have published extensively on both Malaysia and political risk; some of these publications are cited below. Political risk assessment depends on experts on the countries that they examine. I don't usually refer to myself as an "expert" but rather as a specialist. However, the common reference in political risk studies is to data generated by experts. In the paper below I discuss the nature of political risk assessment, Malaysia, my own credentials that have gotten me into the political risk business, and three political risk assessment methodologies, with the results for Malaysia for each. I give emphasis to the assessment that I have done using the Economist method, for reasons that I provide below. I was able to incorporate interviews of 35 professional subjects in Malaysia in February 2014, in which they were each able to rate Malaysia using the Economist method. They were drawn from government, business, journalism, and academe. I think the results are interesting, at least.
Funding: A'Sharqiyah University, Sultanate of Oman, under Research Project Grant No. BFP/RGP/ICT/22/490.
Funding: National Natural Science Foundation of China (60504024), Zhejiang Provincial Natural Science Foundation of China (Y106010), and the Specialized Research Fund for the Doctoral Program of Higher Education of China (20060335022).
Funding: The Research Council (TRC) of the Sultanate of Oman, Block Funding Program, agreement no. TRC/BFP/ASU/01/2019; supported in part by Universiti Sains Islam Malaysia (USIM), Malaysia.