Objective: To compare the outcomes of transumbilical single-port versus three-port laparoscopic cholecystectomy, and to evaluate the feasibility and effectiveness of transumbilical single-port laparoscopic cholecystectomy (TUSPLC). Methods: The clinical data of 51 patients who underwent TUSPLC and 51 patients who underwent conventional three-port laparoscopic cholecystectomy (LC) in the Department of General Surgery, Sijing Hospital of Songjiang District, Shanghai, from March 2020 to November 2021 were retrospectively analyzed. Results: Operative time was significantly longer in the TUSPLC group than in the conventional three-port LC group [(49.4±13.3) min vs (31.2±11.5) min, P=0.01], and the proportion of patients requiring an additional port was higher (9.80% vs 1.96%, P=0.02). The TUSPLC group had a lower rate of postoperative analgesic use (7.8% vs 33.3%, P=0.01) and a higher abdominal-wall scar satisfaction score [(3.88±0.11) vs (2.75±0.31), P=0.01]. Intraoperative blood loss, postoperative complications, and length of hospital stay did not differ significantly between the two groups. Conclusion: Compared with conventional three-port LC, TUSPLC is less invasive, leaves a concealed and cosmetically superior incision, and achieves higher patient satisfaction; it better reflects the concept of natural orifice transluminal endoscopic surgery (NOTES) and can be adopted in suitable patients.
Existing solutions do not work well when multiple targets coexist in a sentence, because they typically separate the targets and process them independently: if the original sentence contains N targets, it is repeated N times, with only one target handled per pass. To some extent, this degenerates the fine-grained sentiment classification task into sentence-level sentiment classification, and processing each target separately ignores the internal relations and interactions between targets. Based on these considerations, we propose using a Graph Convolutional Network (GCN) to model all targets appearing in a sentence simultaneously, building the graph from their positional relationships, and to construct a graph of sentiment relations between targets based on differences in the sentiment polarity of the target words. In addition to the standard target-dependent sentiment classification task, an auxiliary node-relation classification task is constructed. Experiments demonstrate that our model achieves competitive performance on the benchmark datasets of SemEval-2014 Task 4, i.e., reviews of restaurants and laptops. The results further show that treating target words as isolated individuals is disadvantageous, and that the multi-task learning model strengthens the feature-extraction and expressive ability of the model.
Funding: This study was supported in part by the Research Innovation Team Fund (Award No. 18TD0026) from the Department of Education, and in part by the Sichuan Key Research & Development Project (Project No. 2020YFG0168) from the Science & Technology Department, Sichuan Province.
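To make the approach described in the abstract concrete, below is a minimal PyTorch sketch of modeling all targets in a sentence jointly with a GCN plus an auxiliary relation head, rather than running one pass per target. The class name TargetGCN, the window-based positional adjacency, the hidden size, and the pairwise relation head are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch (not the authors' code): target tokens become graph
# nodes, edges come from positional proximity, one GCN layer lets targets
# interact, and two heads serve the two tasks: per-target sentiment polarity
# (main task) and pairwise node-relation classification (auxiliary task).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TargetGCN(nn.Module):
    def __init__(self, hidden=128, num_polarities=3, num_relations=2):
        super().__init__()
        self.gcn = nn.Linear(hidden, hidden)  # one GCN layer: A_norm @ H @ W
        self.sentiment_head = nn.Linear(hidden, num_polarities)
        # relation head sees a concatenated pair of node states
        self.relation_head = nn.Linear(2 * hidden, num_relations)

    def forward(self, target_feats, adj):
        # target_feats: (num_targets, hidden) encoder features per target
        # adj: (num_targets, num_targets) position-based adjacency (given)
        deg = adj.sum(-1, keepdim=True).clamp(min=1)   # row-normalize
        h = F.relu(self.gcn((adj / deg) @ target_feats))
        sentiment_logits = self.sentiment_head(h)      # main task
        # auxiliary task: classify the relation of every target pair
        n = h.size(0)
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        relation_logits = self.relation_head(pairs)    # (n, n, num_relations)
        return sentiment_logits, relation_logits

def position_adjacency(positions, window=10):
    # Connect two targets whose token positions are within `window` tokens:
    # one guessed instantiation of "based on the positional relationship".
    pos = torch.tensor(positions, dtype=torch.float)
    dist = (pos.unsqueeze(0) - pos.unsqueeze(1)).abs()
    return (dist <= window).float()

# Usage sketch: three targets with hypothetical encoder features.
model = TargetGCN()
feats = torch.randn(3, 128)             # e.g. contextual features per target
adj = position_adjacency([2, 7, 20])    # token positions of the targets
sent_logits, rel_logits = model(feats, adj)
```

In a multi-task setup like the one the abstract describes, the two heads would typically be trained jointly with a weighted sum of cross-entropy losses, e.g. loss = L_sentiment + λ·L_relation, so that the auxiliary relation task regularizes the shared target representations.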