In today’s digital era, the rapid evolution of image editing technologies has brought about a significant simplification of image manipulation. Unfortunately, this progress has also given rise to the misuse of manipulated images across various domains. One of the pressing challenges stemming from this advancement is the increasing difficulty in discerning between unaltered and manipulated images. This paper offers a comprehensive survey of existing methodologies for detecting image tampering, shedding light on the diverse approaches employed in the field of contemporary image forensics. The methods used to identify image forgery can be broadly classified into two primary categories: classical machine learning techniques, heavily reliant on manually crafted features, and deep learning methods. Additionally, this paper explores recent developments in image forensics, placing particular emphasis on the detection of counterfeit colorization. Image colorization involves predicting colors for grayscale images, thereby enhancing their visual appeal. The advancements in colorization techniques have reached a level where distinguishing between authentic and forged images with the naked eye has become an exceptionally challenging task. This paper serves as an in-depth exploration of the intricacies of image forensics in the modern age, with a specific focus on the detection of colorization forgery, presenting a comprehensive overview of methodologies in this critical field.
Full-color imaging is essential in digital pathology for accurate tissue analysis. Utilizing advanced optical modulation and phase retrieval algorithms, Fourier ptychographic microscopy (FPM) offers a powerful solution for high-throughput digital pathology, combining high resolution, large field of view, and extended depth of field (DOF). However, the full-color capabilities of FPM are hindered by coherent color artifacts and reduced computational efficiency, which significantly limits its practical applications. Color-transfer-based FPM (CFPM) has emerged as a potential solution, theoretically reducing both acquisition and reconstruction time threefold. Yet, existing methods fall short of achieving the desired reconstruction speed and colorization quality. In this study, we report a generalized dual-color-space constrained model for FPM colorization. This model provides a mathematical framework for model-based FPM colorization, enabling a closed-form solution without the need for redundant iterative calculations. Our approach, termed generalized CFPM (gCFPM), achieves colorization within seconds for megapixel-scale images, delivering superior colorization quality in terms of both colorfulness and sharpness, along with an extended DOF. Both simulations and experiments demonstrate that gCFPM surpasses state-of-the-art methods across all evaluated criteria. Our work offers a robust and comprehensive workflow for high-throughput full-color pathological imaging using FPM platforms, laying a solid foundation for future advancements in methodology and engineering.
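To make the color-transfer idea concrete, below is a minimal Python sketch of the classic statistics-matching color transfer that CFPM-style pipelines build on: a grayscale FPM reconstruction is painted with the per-channel color statistics of a reference color image in CIELAB space. This is an illustrative baseline, not the paper's gCFPM dual-color-space model; the channel choice and the `transfer_color` helper are assumptions for demonstration.

```python
# Classic Reinhard-style statistics matching in Lab space (illustrative only;
# NOT the gCFPM closed-form model described in the abstract).
import numpy as np
from skimage import color

def transfer_color(gray_recon, reference_rgb):
    """gray_recon: H x W float image in [0, 1]; reference_rgb: float RGB image in [0, 1]."""
    # Treat the grayscale reconstruction as a neutral RGB image, then match
    # per-channel mean and standard deviation to the reference in Lab space.
    src_lab = color.rgb2lab(np.dstack([gray_recon] * 3))
    ref_lab = color.rgb2lab(reference_rgb)
    out = np.empty_like(src_lab)
    for c in range(3):
        s, r = src_lab[..., c], ref_lab[..., c]
        out[..., c] = (s - s.mean()) / (s.std() + 1e-8) * r.std() + r.mean()
    return np.clip(color.lab2rgb(out), 0.0, 1.0)
```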
Colorization is the practice of adding appropriate chromatic values to monochrome photographs or videos. A real-valued luminance image can be mapped to a three-dimensional color image. However, it is a severely ill-defined problem and does not have a single solution. In this paper, an encoder-decoder Convolutional Neural Network (CNN) model is used for colorizing gray images, where the encoder is a Densely Connected Convolutional Network (DenseNet) and the decoder is a conventional CNN. The DenseNet extracts image features from gray images and the conventional CNN outputs the a*b* color channels. Because desaturated color components greatly outnumber saturated ones in the training images, the predicted a*b* channels are strongly biased towards desaturation. To address this, we rebalance the predicted a*b* color channels by smoothing every subregion individually using an average filter, where a two-stage k-means clustering technique is applied to divide the image into subregions. We then apply a gamma transformation to the entire a*b* channel to saturate the image. We compare our proposed method with several existing methods. The experimental results show that our proposed method makes notable improvements over the existing methods, and the color representation of gray-scale images produced by our method is more visually plausible. Additionally, our approach outperforms other approaches in terms of Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and histogram comparison.
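As a rough illustration of the rebalancing step, the following Python sketch partitions the predicted a*b* channels into subregions with k-means, average-filters each subregion separately, and then applies a gamma transform to boost saturation. A single k-means pass stands in for the paper's two-stage scheme, and the cluster count, filter size, and gamma value are illustrative assumptions rather than the paper's tuned settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.cluster import KMeans

def rebalance_ab(ab, k=8, gamma=0.8, win=5):
    """ab: H x W x 2 predicted a*b* channels (floats, roughly in [-110, 110])."""
    h, w, _ = ab.shape
    # Cluster chromatic values to carve the image into subregions
    # (single-stage stand-in for the paper's two-stage k-means).
    labels = KMeans(n_clusters=k, n_init=4).fit_predict(ab.reshape(-1, 2)).reshape(h, w)
    smoothed = np.zeros_like(ab)
    for c in range(k):
        m = (labels == c).astype(float)[..., None]
        num = uniform_filter(ab * m, size=(win, win, 1))    # masked average filter:
        den = uniform_filter(m, size=(win, win, 1)) + 1e-8  # smoothing stays inside the subregion
        smoothed += (num / den) * m
    # Gamma transform on chroma magnitude to push colors towards saturation.
    chroma = np.linalg.norm(smoothed, axis=-1, keepdims=True) + 1e-8
    boosted = 110.0 * (chroma / 110.0) ** gamma
    return smoothed / chroma * boosted
```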
A new local cost function is proposed in this paper based on the assumption of a linear relationship between the values of the color components and the intensity component in each local image window. A new quadratic objective function is then derived from it, and the globally optimal chrominance values can be computed by solving a sparse linear system of equations. Colorization experiments on various test images confirm that the colorized images obtained by our proposed method have more vivid colors and sharper boundaries than those obtained by the traditional method. The peak signal-to-noise ratio (PSNR) of the colorized images and the average estimation error of the chrominance values relative to the original images also show that our proposed method gives more precise estimates than the traditional method.
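For context, a representative form of such a quadratic objective (the classic optimization-based colorization cost built on the same local linearity assumption) is shown below; the paper's exact local cost and weighting may differ.

```latex
\[
J(U) \;=\; \sum_{\mathbf{r}} \Bigl( U(\mathbf{r}) - \sum_{\mathbf{s} \in N(\mathbf{r})} w_{\mathbf{r}\mathbf{s}}\, U(\mathbf{s}) \Bigr)^{2},
\qquad
w_{\mathbf{r}\mathbf{s}} \;\propto\; 1 + \frac{\bigl(Y(\mathbf{r}) - \mu_{\mathbf{r}}\bigr)\bigl(Y(\mathbf{s}) - \mu_{\mathbf{r}}\bigr)}{\sigma_{\mathbf{r}}^{2}},
\]
```

Here U is a chrominance channel, Y the intensity, N(r) the local window around pixel r, and μ_r and σ_r² the mean and variance of Y over that window, with the weights normalized to sum to one. Setting the gradient of J to zero, subject to the user-given color constraints, yields the sparse linear system whose solution is the globally optimal chrominance.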
The effect of the single coloring agent Nd(NO3)3 on the crystallization, microstructure, and colorization of Li2O-Al2O3-SiO2 (LAS) glass ceramics was investigated by differential thermal analysis (DTA), X-ray diffractometry (XRD), and scanning electron microscopy (SEM). The introduction of a small amount of neodymium has no effect on the crystallization manner or the formation of the main crystalline phase, but larger amounts of neodymium weaken the crystallization of the LAS glass. A small amount of neodymium can increase the glossiness of the LAS glass ceramic, while larger amounts color the LAS glass light purple or red-purple. The colorability of neodymium for the LAS glass ceramic decreases with increasing crystallization temperature.
The decolorization of Direct Black 22 by Aspergillus ficuum has been studied. It was found that Aspergillus ficuum could effectively decolorize Direct Black 22, especially when grown as pelleted mycelia. Results showed that for media containing Direct Black 22 at 50 mg/L, 98.05% of the initial color was removed within 24 h. The optimum pH and temperature for decolorization are 4.0 and 33 °C, respectively. Aeration was quite beneficial to decolorization. The medium composition and the concentration of Direct Black 22 could affect the rate of decolorization. Assays of the degraded dye products by UV-visible spectrophotometry, together with macroscopic observation, showed that the decolorization of Direct Black 22 by mycelial pellets involves two important processes: bioadsorption and biodegradation. The degradation experiments agree with the Michaelis-Menten kinetics equation.
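For reference, the standard Michaelis-Menten rate law that such degradation data are fitted against is

```latex
\[
v \;=\; \frac{V_{\max}\,[S]}{K_m + [S]},
\]
```

where v is the decolorization (degradation) rate, [S] the dye concentration, V_max the maximum rate, and K_m the dye concentration at which the rate is half-maximal. The abstract does not report the fitted V_max and K_m values, so none are given here.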
Image colorization is a classic and important topic in computer graphics, where the aim is to add color to a monochromatic input image to produce a colorful result. In this survey, we present the history of colorization research in chronological order and summarize popular algorithms in this field. Early work on colorization mostly focused on developing techniques to improve the colorization quality. In the last few years, researchers have considered more possibilities such as combining colorization with NLP (natural language processing) and focused more on industrial applications. To better control the color, various types of color control are designed, such as providing reference images or color scribbles. We have created a taxonomy of the colorization methods according to the input type, divided into grayscale, sketch-based, and hybrid. The pros and cons are discussed for each algorithm, and they are compared according to their main characteristics. Finally, we discuss how deep learning, and in particular Generative Adversarial Networks (GANs), has changed this field.
Digital cartoon production requires extensive manual labor to colorize sketches with visually pleasant color composition and color shading. During colorization, the artist usually takes an existing cartoon image as color guidance, particularly when colorizing related characters or an animation sequence. Reference-guided colorization is more intuitive than colorization with other hints, such as color points, scribbles, or text-based hints. Unfortunately, reference-guided colorization is challenging since the style of the colorized image should match the style of the reference image in terms of both global color composition and local color shading. In this paper, we propose a novel learning-based framework which colorizes a sketch based on a color style feature extracted from a reference color image. Our framework contains a color style extractor to extract the color feature from a color image, a colorization network to generate multi-scale output images by combining a sketch and a color feature, and a multi-scale discriminator to improve the realism of the output image. Extensive qualitative and quantitative evaluations show that our method outperforms existing methods, providing both superior visual quality and style reference consistency in the task of reference-based colorization.
This paper proposes a structure-aware nonlocal energy optimization framework for interactive image colorization with sparse scribbles. Our colorization technique propagates colors to both local intensity-continuous regions and remote texture-similar regions without explicit image segmentation. We implement the nonlocal principle by computing k nearest neighbors in the high-dimensional feature space. The feature space contains not only image coordinates and intensities but also statistical texture features obtained with the direction-aligned Gabor wavelet filter. Structure maps are utilized to scale texture features to avoid artifacts along high-contrast boundaries. We show various experimental results and comparisons on image colorization, selective recoloring and decoloring, and progressive color editing to demonstrate the effectiveness of the proposed approach.
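A minimal Python sketch of the nonlocal neighbour search is given below: each pixel is described by its coordinates, intensity, and Gabor texture responses, and color can then be propagated among its k nearest neighbours in that feature space. The filter bank, the feature scaling, and the omission of the structure-map weighting are simplifying assumptions for illustration.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.neighbors import NearestNeighbors

def nonlocal_neighbors(gray, k=8, frequencies=(0.1, 0.25), n_orient=4):
    """gray: H x W float image in [0, 1]; returns k nonlocal neighbours per pixel."""
    h, w = gray.shape
    yy, xx = np.mgrid[0:h, 0:w]
    feats = [xx / w, yy / h, gray]                       # coordinates + intensity
    for f in frequencies:                                # statistical texture features
        for theta in np.linspace(0, np.pi, n_orient, endpoint=False):
            real, imag = gabor(gray, frequency=f, theta=theta)
            feats.append(np.sqrt(real ** 2 + imag ** 2)) # Gabor response magnitude
    X = np.stack(feats, axis=-1).reshape(h * w, -1)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)                            # idx[:, 0] is the pixel itself
    return idx[:, 1:]                                    # indices of k nonlocal neighbours
```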
Colorization of gray-scale images has attracted much attention for a long time. An important role of image color is as a conveyer of emotions (through color themes). A colorization with an undesired color theme is less useful, even if it is semantically correct. However, this has rarely been considered. Automatic colorization respecting both the semantics and the emotions is undoubtedly a challenge. In this paper, we propose a complete system for affective image colorization. We only need the user to assist object segmentation along with text labels and an affective word. First, the text labels along with other object characters are jointly used to filter internet images to give each object a set of semantically correct reference images. Second, we select a set of color themes according to the affective word based on art theories. With these themes, a generic algorithm is used to select the best reference for each object, balancing various requirements. Finally, we propose a hybrid texture synthesis approach for colorization. To the best of our knowledge, it is the first system which is able to efficiently colorize a gray-scale image semantically in an emotionally controllable fashion. Our experiments show the effectiveness of our system, especially its benefit compared with the previous Markov random field (MRF) based method.
Video colorization is a challenging and highly ill-posed problem. Although recent years have witnessed remarkable progress in single image colorization, there is relatively less research effort on video colorization, and existing methods always suffer from severe flickering artifacts (temporal inconsistency) or unsatisfactory colorization. We address this problem from a new perspective, by jointly considering colorization and temporal consistency in a unified framework. Specifically, we propose a novel temporally consistent video colorization (TCVC) framework. TCVC effectively propagates frame-level deep features in a bidirectional way to enhance the temporal consistency of colorization. Furthermore, TCVC introduces a self-regularization learning (SRL) scheme to minimize the differences between predictions obtained using different time steps. SRL does not require any ground-truth color videos for training and can further improve temporal consistency. Experiments demonstrate that our method not only provides visually pleasing colorized video but also achieves clearly better temporal consistency than state-of-the-art methods. A video demo is provided at https://www.youtube.com/watch?v=c7dczMs-olE, while code is available at https://github.com/lyh-18/TCVC-Temporally-Consistent-Video-Colorization.
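The core of the self-regularization idea can be written in a few lines; the sketch below penalizes the disagreement between two colorizations of the same frame produced with different time steps, requiring no ground-truth color. The L2 penalty and the function name are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def self_regularization_loss(pred_a, pred_b):
    """pred_a, pred_b: H x W x 2 a*b* predictions for the same frame,
    obtained by propagating features over different time steps."""
    # Penalize any colour disagreement between the two predictions.
    return float(np.mean((pred_a - pred_b) ** 2))
```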
Grayscale image colorization is an important computer graphics problem with a variety of applications. Recent fully automatic colorization methods have made impressive progress by formulating image colorization as a pixel-wise prediction task and utilizing deep convolutional neural networks. Though tremendous improvements have been made, the result of automatic colorization is still far from perfect. Specifically, there still exist common pitfalls in maintaining color consistency in homogeneous regions as well as precisely distinguishing colors near region boundaries. To tackle these problems, we propose a novel fully automatic colorization pipeline which involves a boundary-guided CRF (conditional random field) and a CNN-based color transform as post-processing steps. In addition, as there usually exist multiple plausible colorization proposals for a single image, automatic evaluation for different colorization methods remains a challenging task. We further introduce two novel automatic evaluation schemes to efficiently assess colorization quality in terms of spatial coherence and localization. Comprehensive experiments demonstrate great quality improvement in results of our proposed colorization method under multiple evaluation metrics.
The automatic colorization of anime line drawings is a challenging problem in production pipelines. Recent advances in deep neural networks have addressed this problem; however, collecting many images of colorization targets in novel anime work before the colorization process starts leads to chicken-and-egg problems and has become an obstacle to using them in production pipelines. To overcome this obstacle, we propose a new patch-based learning method for few-shot anime-style colorization. The learning method adopts an efficient patch sampling technique with position embedding according to the characteristics of anime line drawings. We also present a continuous learning strategy that continuously updates our colorization model using new samples colorized by human artists. The advantage of our method is that it can learn our colorization model from scratch or from pre-trained weights using only a few pre- and post-colorized line drawings that are created by artists in their usual colorization work. Therefore, our method can be easily incorporated within existing production pipelines. We quantitatively demonstrate that our colorization method outperforms state-of-the-art methods.
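A rough Python sketch of patch sampling with position embedding is shown below: patches are drawn from the line drawing, mostly blank regions are skipped, and each patch is paired with its normalized location on the page. The patch size, blankness threshold, and the use of plain coordinates as the embedding are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def sample_patches(line_art, n_patches=64, patch=32, max_tries=10_000, rng=None):
    """line_art: H x W grayscale line drawing in [0, 1] (ink close to 0)."""
    rng = np.random.default_rng(rng)
    h, w = line_art.shape
    patches, positions = [], []
    tries = 0
    while len(patches) < n_patches and tries < max_tries:
        tries += 1
        y = int(rng.integers(0, h - patch))
        x = int(rng.integers(0, w - patch))
        crop = line_art[y:y + patch, x:x + patch]
        if crop.min() > 0.95:          # skip mostly blank paper regions
            continue
        patches.append(crop)
        # Normalized patch-center coordinates serve as a simple position embedding.
        positions.append([(x + patch / 2) / w, (y + patch / 2) / h])
    return np.stack(patches), np.asarray(positions)   # (N, P, P) and (N, 2)
```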
Video colorization aims to add color to grayscale or monochrome videos. Although existing methods have achieved substantial and noteworthy results in the field of image colorization, video colorization presents more formidable obstacles due to the additional necessity for temporal consistency. Moreover, there is rarely a systematic review of video colorization methods. In this paper, we aim to review existing state-of-the-art video colorization methods. In addition, maintaining spatial-temporal consistency is pivotal to the process of video colorization. To gain deeper insight into the evolution of existing methods in terms of spatial-temporal consistency, we further review video colorization methods from a novel perspective. Video colorization methods can be categorized into four main categories: optical-flow based methods, scribble-based methods, exemplar-based methods, and fully automatic methods. However, optical-flow based methods rely heavily on accurate optical-flow estimation, scribble-based methods require extensive user interaction and modifications, exemplar-based methods face challenges in obtaining suitable reference images, and fully automatic methods often struggle to meet specific colorization requirements. We also discuss the existing challenges and highlight several future research opportunities worth exploring.
A t-tone coloring of a graph assigns t distinct colors to each vertex such that vertices at distance d have fewer than d colors in common. The t-tone chromatic number of a graph is the smallest number of colors used over all t-tone colorings of that graph. In this article, we study t-tone coloring of some finite planar lattices and obtain exact formulas for their t-tone chromatic numbers.
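To make the definition concrete, here is a small illustrative Python checker for the t-tone condition (every pair of vertices at distance d shares fewer than d colors), together with a valid 2-tone coloring of a 3-vertex path; the adjacency-dictionary representation and the example are assumptions for demonstration only.

```python
from collections import deque
from itertools import combinations

def is_t_tone(adj, coloring):
    """adj: {vertex: set of neighbours}; coloring: {vertex: set of t colors}."""
    def bfs_dist(src):
        dist, queue = {src: 0}, deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return dist
    dists = {u: bfs_dist(u) for u in adj}
    for u, v in combinations(adj, 2):
        d = dists[u].get(v)                       # None if u and v are disconnected
        if d is not None and len(coloring[u] & coloring[v]) >= d:
            return False
    return True

# A valid 2-tone coloring of the path a - b - c using five colors:
# adjacent vertices share no color, and the endpoints (distance 2) share only one.
path = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
print(is_t_tone(path, {"a": {1, 2}, "b": {3, 4}, "c": {1, 5}}))  # True
```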
Video colorization encounters two principal challenges: colorization quality and temporal flicker. Balancing colorization quality and temporal consistency is a significant challenge. To address these issues, we employ the transformer model in the field of video colorization and introduce a pioneering video colorization method named VCTR. Initially, to guarantee the quality of video coloring, we employ a pretrained image coloring network to add color to the grayscale video frames. Next, the feature extraction module of VCTR is utilized to perform feature extraction and propagation on the color frames during the colorization process. Finally, the transformer module is designed to fully leverage local information via the local feature self-attention layer. Additionally, motion information from bidirectional optical flow is utilized to identify correlations across video frames for feature fusion, guaranteeing both the coloring effect and temporal consistency. The experimental results demonstrate that VCTR outperforms existing methods on two publicly available datasets, namely DAVIS and Videvo. VCTR attains the top ranking on the long-series dataset, Videvo, based on the CTBI (Colorization and Temporal-Consistency Balance Index). This achievement underscores VCTR’s ability to strike a commendable balance between colorization quality and temporal consistency.
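As an illustration of how optical flow ties neighbouring frames together before fusion, the sketch below warps a neighbouring frame (or feature map) into the current frame's coordinates with OpenCV. It assumes the flow maps current-frame pixels to their locations in the neighbouring frame (backward warping); the learned fusion performed by VCTR's transformer is not reproduced here.

```python
import cv2
import numpy as np

def warp_to_current(neighbor, flow):
    """neighbor: H x W x C image or feature map; flow: H x W x 2 backward flow
    (current-frame pixel -> location in the neighbouring frame)."""
    neighbor = np.float32(neighbor)            # cv2.remap requires uint8/float32 input
    h, w = flow.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    # Sample the neighbouring frame at the flow-displaced positions.
    return cv2.remap(neighbor, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```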
Image coloring is an inherently uncertain and multimodal problem. By inputting a grayscale image into a coloring network, visually plausible colored photos can be generated. Conventional methods primarily rely on semantic information for image colorization. These methods still suffer from color contamination and semantic confusion. This is largely due to the limited capacity of convolutional neural networks to learn deep semantic information inherent in images effectively. In this paper, we propose a network structure that addresses these limitations by leveraging multi-level semantic information classification and fusion. Additionally, we introduce a global semantic fusion network to combat the issues of color contamination. The proposed coloring encoder accurately extracts object-level semantic information from images. To further enhance visual plausibility, we employ a self-supervised adversarial training method. We train the network structure on various datasets with varying amounts of data and evaluate its performance using the ImageNet validation set and the COCO validation set. Experimental results demonstrate that our proposed algorithm can generate more realistic images compared to previous approaches, showcasing its high generalization ability.
It is of great scientific significance to construct 3D dynamic structural color with special color effects based on a microlens array. However, the problems of imperfect mechanisms and poor color quality need to be solved. A method of 3D structural color tuning on periodic metasurfaces fabricated by microlens-array and self-assembly technology was proposed in this study. In the experiment, polydimethylsiloxane (PDMS) flexible film was used as a substrate, and SiO2 microspheres were scraped into grooves of the PDMS film to form 3D photonic crystal structures. By adjusting the number of blade-coating passes and the microsphere concentration, high-saturation structural color micropatterns were obtained. These films were then matched with microlens arrays to produce dynamic graphics with iridescent effects. The results showed that two blade-coating passes and a SiO2 microsphere concentration of 50% are the best conditions. This method demonstrates the potential to be widely applied in anticounterfeiting printing and ultra-high-resolution displays.
Pepper (Capsicum annuum L.) is a typical self-pollinating crop with obvious heterosis in hybrids. Consequently, the use of morphological markers during the pepper seedling stage is crucial for pepper breeding. The color of the hypocotyl is widely used as a phenotypic marker in crossing studies of pepper. Pepper accessions generally have purple hypocotyls, which are mainly due to anthocyanin accumulation in seedlings, and green hypocotyls are rarely observed in pepper. Here we report the characterization of a green hypocotyl mutant of pepper, Cha1, which was identified from a pepper ethyl methanesulfonate (EMS) mutant library. Fine mapping revealed that the causal gene, CaTTG1, belonging to the WD40 repeat family, controls the green hypocotyl phenotype of the mutant. Virus-induced gene silencing (VIGS) confirmed that CaTTG1 regulates anthocyanin accumulation. RNA-seq data showed that expression of the structural genes CaDFR, CaANS, and CaUF3GT in the anthocyanin biosynthetic pathway was significantly decreased in Cha1 compared to the wild type. Yeast two-hybrid (Y2H) experiments also confirmed that CaTTG1 activates the anthocyanin structural genes by forming an MBW complex with CaAN1 and CaGL3. In summary, this study provides a green hypocotyl mutant of pepper, and the Kompetitive Allele Specific PCR (KASP) marker developed based on the mutation site of the underlying gene should be helpful for pepper breeding.
Super-fine electrohydrodynamic inkjet (SIJ) printing of perovskite nanocrystal (PNC) colloid ink exhibits significant potential for the fabrication of high-resolution color conversion microstructure arrays for full-color micro-LED displays. However, the impact of the solvent on both the printing process and the morphology of SIJ-printed PNC color conversion microstructures remains underexplored. In this study, we prepared samples of CsPbBr3 PNC colloid inks in various solvents and investigated the solvent's impact on SIJ-printed PNC microstructures. Our findings reveal that the boiling point of the solvent is crucial to the SIJ printing process of PNC colloid inks. Only when the boiling point of the solvent falls within the optimal range can regularly positioned, micron-scale, conical PNC microstructures be successfully printed. Below this optimal range, the ink cannot be ejected from the nozzle, while above this range, irregularly positioned microstructures with nanoscale height and coffee-ring-like morphology are produced. Based on these observations, high-resolution color conversion PNC microstructures were effectively prepared by SIJ printing of PNC colloid ink dispersed in dimethylbenzene solvent.