New data from FRBs have provided an exciting new window on the cosmos. For the first time we have both the Dispersion Measure (DM) of distant sources and their redshift. This gives us the opportunity to determine the average electron number density in intergalactic space and thus test New Tired Light predictions. Here, in an alternative cosmology, the universe is static and redshifts are produced by an interaction between photons and the electrons in the intergalactic medium. In a paper published in summer 2006, New Tired Light (NTL) predicted an average electron number density of n = 0.5 m<sup>-3</sup>. In 2016 a paper was published reporting that, for the first time, both the DM of an FRB and the redshift of its host galaxy had been found. Using standard physics, this gave an electron number density of n = 0.5 m<sup>-3</sup>, confirming the prediction NTL had made ten years earlier. This measured electron number density enables New Tired Light to give a definitive value of the Hubble constant: 63 km/s per Mpc, which compares well with currently accepted values. Importantly, since in NTL the redshift and the dispersion are both due to the electrons in intergalactic space, a relationship between DM and redshift can be predicted. NTL predicts that DM and ln(1 + z) will be directly proportional and related by the formula DM = [m<sub>e</sub>c/(2hr<sub>e</sub> × 3.086 × 10<sup>22</sup>)]ln(1 + z), where m<sub>e</sub> and r<sub>e</sub> are the rest mass and classical radius of the electron, c is the speed of light in a vacuum and h is the Planck constant. The numerical factor converts units from m<sup>-2</sup> to pc∙cm<sup>-3</sup>. This reduces to DM = 2380ln(1 + z). Using data from five FRBs this is tested and a linear relation of the form DM = 1830ln(1 + z) is found.
The gradient of the plot from the observed data is within 23% of that predicted by NTL. Recently the Tolman surface brightness test has been applied to the HUDF and the results support a static universe, whilst the possibility of two differing types of SN Ia whose distribution changes with distance means that tired light models can no longer be ruled out. Using SDF we know the distance to the Antlia galaxy cluster to be 1.26 × 10<sup>24</sup> m. With the average electron number density of n = 0.5 m<sup>-3</sup> found from the Dispersion Measures of the FRBs, New Tired Light gives, from first principles, a predicted redshift of 0.0086. This compares well with the value found spectroscopically, 0.0087, a difference of approximately 1%. It is shown that if the energy transferred to a recoiling electron, when a UV photon of wavelength λ = 5 × 10<sup>-8</sup> m interacts with it, is emitted as a secondary photon, that photon will have a wavelength of 2.2 mm, the wavelength at which the CMB curve peaks.
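The headline numbers quoted above (the 2380 pc∙cm<sup>-3</sup> gradient, the 63 km/s per Mpc Hubble constant, the 0.0086 cluster redshift and the millimetre-wavelength secondary photon) can be reproduced from physical constants. A minimal sketch, assuming the NTL relations H = 2n<sub>e</sub>hr<sub>e</sub>/m<sub>e</sub> and z = e<sup>Hd/c</sup> − 1 and a single-recoil kinetic energy h<sup>2</sup>/(2m<sub>e</sub>λ<sup>2</sup>); these formula forms are inferred readings of the abstract, not quoted from it:

```python
import math

# Check the headline NTL numbers from physical constants.  The relations used
# (H = 2*n_e*h*r_e/m_e, z = exp(H*d/c) - 1, recoil KE = h^2/(2*m_e*lam^2)) are
# inferred readings of the abstract, not formulas quoted from it.
m_e = 9.109e-31          # electron rest mass, kg
c = 2.998e8              # speed of light in vacuum, m/s
h = 6.626e-34            # Planck constant, J s
r_e = 2.818e-15          # classical electron radius, m
PC_CM3_IN_M2 = 3.086e22  # 1 pc cm^-3 expressed in m^-2

# DM-redshift gradient m_e*c/(2*h*r_e), converted to pc cm^-3 (cf. "2380")
grad_SI = m_e * c / (2 * h * r_e)
grad_pc = grad_SI / PC_CM3_IN_M2

# Hubble constant for n_e = 0.5 m^-3, converted to km/s per Mpc (cf. "63")
n_e = 0.5
H = 2 * n_e * h * r_e / m_e          # s^-1
H_km_s_Mpc = H * 3.086e22 / 1e3      # times metres per Mpc, then m/s -> km/s

# Predicted redshift at the quoted cluster distance (cf. "0.0086")
d = 1.26e24                          # m
z = math.exp(H * d / c) - 1

# Secondary photon from the single-recoil energy of a 5e-8 m UV photon
lam_uv = 5e-8
KE = h**2 / (2 * m_e * lam_uv**2)    # recoil kinetic energy, J
lam_sec = h * c / KE                 # wavelength of the secondary photon, m

print(f"gradient {grad_pc:.0f} pc cm^-3, H0 {H_km_s_Mpc:.1f} km/s/Mpc, "
      f"z {z:.4f}, secondary {lam_sec * 1e3:.2f} mm")
```

Under these assumed forms the recoil calculation lands at about 2.1 mm, the same millimetre scale as the 2.2 mm quoted.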
Dispersion measure in an FRB’s signal is produced by the photons of the radio waves interacting with the free electrons in the IGM. In New Tired Light (NTL), redshifts are produced by the photons of light interacting with these self-same electrons, and so one would expect a direct relationship between the DM of an FRB and the redshift of its host galaxy. However, workers in this field assume expansion and weight the DM by dividing it by the scale factor (1 + z) to allow for expansion. Once this weighting is removed, it was predicted back in 2016 (when the first FRB was localized), and later presented at a conference and published in the proceedings, that, as more FRBs were localized, a graph of DM versus ln(1 + z) would be a straight line of gradient (m<sub>e</sub>c/2hr<sub>e</sub>), or 7.32 × 10<sup>25</sup> m<sup>−2</sup> in SI units. The original paper had twenty-four data points, but this has since risen to sixty-four usable FRBs, and so this corrigendum updates that paper so that all sixty-four are used. The data give a straight-line graph of gradient 7.12 × 10<sup>25</sup> m<sup>−2</sup>, a difference of 3% from the (m<sub>e</sub>c/2hr<sub>e</sub>) predicted nine years earlier.
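The gradient comparison above amounts to a zero-intercept least-squares fit of DM against ln(1 + z). A sketch of that fit on synthetic points (generated from the model itself with ±10% scatter, not the real 64 localized FRBs):

```python
import math
import random

# Illustrative only: synthetic (z, DM) points, NOT the 64 real localized FRBs.
# Demonstrates how the gradient of DM versus ln(1+z) is extracted with a
# zero-intercept least-squares fit, the form the predicted relation takes.
GRAD_PRED = 9.109e-31 * 2.998e8 / (2 * 6.626e-34 * 2.818e-15)  # m_e*c/(2*h*r_e)

random.seed(1)
zs = [0.05 * (i + 1) for i in range(64)]
xs = [math.log(1 + z) for z in zs]
dms = [GRAD_PRED * x * random.uniform(0.9, 1.1) for x in xs]  # +/-10% scatter

# Least squares through the origin: slope = sum(x*y) / sum(x*x)
slope = sum(x * y for x, y in zip(xs, dms)) / sum(x * x for x in xs)
print(f"fitted gradient {slope:.3e} m^-2 vs predicted {GRAD_PRED:.3e} m^-2")
```

With real data the fitted slope, not the scatter model, carries the physics; here the fit necessarily recovers the input gradient to within the injected noise.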
This paper analyses the center-to-limb problem of the Sun encountered in the solar lines by using, for the first time, the New Tired Light theory, based on photons of light losing energy through interaction with electrons. To this end, a detailed geometrical orbital model, built to scale, was created in order to trace back all physical characteristics of the Earth orbiting the Sun for three days in the year 1946, when the redshift measurements were taken. This paper suggests that, since the space between the Sun and the Earth contains an exponentially distributed population of electrons, it acts as a medium for the photons of light. Indeed, along the line of sight of a terrestrial observer, the path from the Sun to the Earth is longer at the limb than at the center of the disk, for every orbital position. Accordingly, the interactions between photons and electrons cause a slight difference in redshift across the entire solar disk, matching the observational data. An important factor is the definition of objective criteria for the radial velocity component of the solar granules, whose variable values refer, in turn, to existing observational data, crucial for the success of the study. The redshift anomaly on the solar disk has been repeatedly detected in many studies, but only a few attempts so far, mostly based on parametrized models, have been made to give a reliable explanation of the measurements.
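The longer limb path can be seen from elementary geometry (this is an illustrative simplification, not the paper's scaled CAD model): the line of sight to the limb is tangent to the solar disk, length √(D² − R²), while the line of sight to the disk center ends at the near surface, length D − R.

```python
import math

# Illustrative geometry (not the paper's CAD model): compare line-of-sight
# path lengths to the solar limb (tangent, sqrt(D^2 - R^2)) and to the disk
# center (near surface, D - R).
D = 1.496e11   # mean Sun-Earth distance, m
R = 6.96e8     # solar radius, m

path_center = D - R
path_limb = math.sqrt(D**2 - R**2)
extra = path_limb - path_center   # ~ R*(1 - R/(2*D)), about one solar radius

print(f"extra path at limb ≈ {extra:.3e} m ({extra / R:.3f} solar radii)")
```

So a limb photon traverses roughly one extra solar radius of the electron medium, the kind of path difference the model converts into a center-to-limb redshift difference.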
The no-evolution, concordance expanding-universe cosmology and the no-evolution, static-universe tired light model are compared against observational data on eight cosmology tests. The no-evolution tired light model is found to make a superior fit on all tests. Attempts to introduce evolutionary corrections to improve the concordance cosmology's fit on one test often worsen its fit on other tests. Light curve data of high-redshift gamma ray bursts and quasars fail to support claims for cosmological time dilation due to expansion. Also, the SCP supernova light curve test results are considered to be flawed by selection-effect biases. The big bang theory also has difficulty accounting for redshift quantization, for the multi-megaparsec periodicity seen in the distribution of galaxy superclusters, and for the discovery of galaxies at redshifts as high as <em>z</em> ~ 11.9. In overview, it is concluded that a static universe cosmology must be sought to explain the origin of the universe. One possible choice is a cosmology that predicts nonconservative tired-light redshifting in intergalactic space, the continuous creation of neutrons in space, the rate of matter creation scaling with both celestial body mass and temperature, and galaxies growing progressively in size and changing their morphology in the manner suggested by Jeans and Hubble.
The Big Bang model was first proposed in 1931 by Georges Lemaitre. Lemaitre and Hubble discovered a linear correlation between distances to galaxies and their redshifts. The correlation between redshifts and distances arises in all expanding models of the universe, as the cosmological redshift is commonly attributed to the stretching of the wavelengths of photons propagating through the expanding space. Fritz Zwicky suggested that the cosmological redshift could be caused by propagating light photons interacting with certain inherent features of the cosmos and losing a fraction of their energy. However, Zwicky did not provide any physical mechanism to support his tired light hypothesis. In this paper, we develop a mechanism that produces the cosmological redshift through head-on collisions between light photons and CMB photons. The process of repeated energy loss of visual photons through n head-on collisions with CMB photons constitutes a primary mechanism for producing the cosmological redshift z. While this process results in a steady reduction in the energy of visual photons, it also results in a continuous increase in the number of photons in the CMB. After a head-on collision with a CMB photon, the incoming light photon, with reduced energy, keeps moving on its original path without any deflection or scattering. After propagation through very large distances in intergalactic space, all light photons will tend to lose the bulk of their energy and fall into the invisible region of the spectrum. Thus, this mechanism of producing the cosmological redshift through gradual energy depletion also explains Olbers' paradox.
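The abstract does not quantify the energy lost per collision. Purely as an order-of-magnitude sketch, and assuming (this is not from the paper) that each head-on collision transfers one mean CMB-photon energy, roughly 2.7kT, one can count the collisions needed to redshift a 500 nm photon to z = 1:

```python
# Illustrative only: the abstract gives no per-collision energy loss, so we
# ASSUME (not from the paper) each head-on collision drains one mean CMB
# photon energy, ~2.7*k*T, from the travelling visual photon.
h, c, k = 6.626e-34, 2.998e8, 1.381e-23
T_cmb = 2.725                      # CMB temperature, K

E0 = h * c / 500e-9                # energy of a 500 nm visual photon, J
dE = 2.7 * k * T_cmb               # assumed loss per head-on collision, J

z_target = 1.0
E_obs = E0 / (1 + z_target)        # redshift z means E drops by factor (1+z)
n_collisions = (E0 - E_obs) / dE   # collisions needed under this assumption

print(f"~{n_collisions:.0f} collisions to reach z = {z_target}")
```

The result, on the order of a few thousand collisions, is only as meaningful as the assumed per-collision loss; it illustrates the bookkeeping of the mechanism, not the paper's actual numbers.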
Fast Radio Bursts from faraway galaxies have travelled through the IGM and provide a tool to study its composition. Presently there are 23 FRBs whose host galaxies have been identified and whose redshifts have been found. This gives us the opportunity to test the Dispersion Measure versus redshift predictions made by two models: the Macquart relation for an expanding universe, and the New Tired Light relationship in a static universe. In New Tired Light, redshifts are produced when a photon is absorbed and re-emitted by the electrons in the IGM, which recoil on both occasions. Some of the energy of the photon is transferred to the kinetic energy of the recoiling electron; the photon has less energy, a lower frequency and a longer wavelength. It has been redshifted. Since dispersion is due to an interaction between radio signals and these same electrons, one would expect a direct relationship between DM and redshift in the New Tired Light model. The relation is DM = (m<sub>e</sub>c/2hr<sub>e</sub>)ln(1 + z) and contains no adjustable parameters—just a combination of universal constants related to the electron and photon. Notice that the relation is independent of the electron number density n<sub>e</sub>, since a change in n<sub>e</sub> affects both the DM and the redshift equally. A graph of DM versus ln(1 + z) will be a straight line of gradient (m<sub>e</sub>c/2hr<sub>e</sub>) and, using SI units, substituting for the constants gives 7.318 × 10<sup>25</sup> m<sup>−2</sup>. Using the data from the 23 well-localized FRBs, with the weighting of the DMs for expansion removed (so that the data correspond to a static universe), a graph of DM versus ln(1 + z) has a gradient of 6.7 × 10<sup>25</sup> m<sup>−2</sup>, 9% below the predicted (m<sub>e</sub>c/2hr<sub>e</sub>).
The Macquart relation involves highly processed data and adjustable parameters to allow for “dark energy” and “dark matter” (neither of which has yet been found) and can be reduced to DM = 850z (in units of pc∙cm<sup>−3</sup>). Using the data from this set of localized FRBs gives a trendline with gradient 1.10 × 10<sup>3</sup> pc∙cm<sup>−3</sup>, almost 30% higher than that predicted in an expanding universe model. The FRB data clearly come down in favour of a static universe rather than an expanding one. Combining the DM-z relationship for the 23 well-localized FRBs with the Hubble diagram, drawn using the NED-D compilation of redshift-independent extragalactic distances, produces a value for n<sub>e</sub>, the mean electron number density of the IGM, of n<sub>e</sub> = 0.48 m<sup>−3</sup>, close to the value n<sub>e</sub> = 0.5 m<sup>−3</sup> long since predicted by NTL.
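The two quoted relations can be evaluated side by side. A sketch (remembering that, per the abstract, the NTL line is compared with DMs whose (1 + z) expansion weighting has been removed, while the Macquart form applies to the weighted DMs, so the two curves are not directly interchangeable):

```python
import math

# Evaluate the two quoted DM-z relations side by side (units: pc cm^-3).
# Caveat from the abstract: the NTL line is fitted to de-weighted DMs and the
# Macquart line to expansion-weighted DMs, so the curves describe differently
# processed data.
GRAD_SI = 9.109e-31 * 2.998e8 / (2 * 6.626e-34 * 2.818e-15)  # m_e*c/(2*h*r_e)
GRAD_PC = GRAD_SI / 3.086e22                                 # ~2370 pc cm^-3

for z in (0.1, 0.3, 0.5):
    dm_macquart = 850 * z                  # reduced Macquart form quoted above
    dm_ntl = GRAD_PC * math.log(1 + z)     # NTL static-universe relation
    print(f"z={z}: Macquart {dm_macquart:.0f}, NTL {dm_ntl:.0f} pc cm^-3")
```

The constant GRAD_PC reproduces the parameter-free NTL gradient from universal constants alone, which is the abstract's central point.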
In 1998, two groups of astronomers, one led by Saul Perlmutter and the other by Brian Schmidt, set out to determine the deceleration—and hence the total mass/energy—of the universe by measuring the recession speeds of type Ia supernovae (SN1a), and came to an unexpected conclusion: ever since the universe was about 7 billion years old, its expansion rate has not been decelerating; instead, it has been speeding up. To account for this acceleration, they proposed that the universe contains a mysterious dark energy, and they revived from oblivion the cosmological constant, positive this time, consistent with the picture of an inflationary universe. To explain the observed dimming of high-redshift SN1a, they relied essentially on the distances of these supernovae being revised upwards. We consider that an accelerated expansion leads straight to a “dark energy catastrophe” (i.e., the chasm between the current cosmological vacuum energy density of ~10 GeV/m<sup>3</sup> and the vacuum energy density proposed by quantum field theory, ~10<sup>122</sup> GeV/m<sup>3</sup>). We suppose instead that the universe undergoes a decelerating expansion under the positive pressure of a dark energy, otherwise called a variable cosmological constant. The dark luminosity of the latter would be that of a “tired light” which has lost energy with distance. As for the low brightness of SN1a, it is explained by two physical processes: the first relates to their intrinsic brightness—assumed not to vary over time—which would in fact depend on chemical conditions that change with temporal evolution; the second concerns their apparent luminosity.
Besides the serious arguments already known, we propose that their luminosity continually fades through interactions with cosmic magnetic fields, as in the terrestrial PVLAS experiment, which loses far more laser photons than expected when crossing a magnetic field. This points toward a “tired light” that has lost energy with distance, and therefore a decelerated expansion of the universe. Moreover, we propose the “centrist” principle to complete the cosmological principle of homogeneity and isotropy, which we take as verified. Without denying the Copernican principle, the centrist principle opposes a “spatial” theoretical construction that accelerates the world towards infinity; it gives a “temporal” and privileged vision which tends to demonstrate the deceleration of expansion.
This paper calculates the redshift of the 2292 MHz radio photon emitted by the Pioneer-6 space probe. The signal crossed the solar corona on the days close to the solar occultation between November and December 1968, the only ones for which scientific data are available, until it reached a terrestrial radio receiver. The study is based on a calculated orbital model of the Earth and Pioneer-6 system, built at a scale of 1:100,000 in a CAD system, on the New Tired Light theory adapted to the geometric and physical configuration of the problem, and on a computational method. Removing the Doppler shift contributions of proper and rotational motions, due to the set-up of the receiver, and excluding the recombination factor of neutral hydrogen, which is irrelevant for distances within 1 AU, the calculation of the redshift can be traced back to the interactions between the radio signal and the electrons of the solar corona alone. The latter are contained in a Stroemgren sphere and are photo-ionized by solar radiation in the UV and X-ray range. Furthermore, in order for there to be an interactional redshift contribution, the electrons have to satisfy the Wigner-crystal precondition, under which their unit potential energy is greater than their kinetic energy. Otherwise, a Thomson scattering process takes place in which the energy of the radio photon remains unchanged. The comparison between the gravitational plus interactional redshift obtained with this methodology and the total redshift obtained from other scientific studies shows a similarity between the curves, including the observational data, in terms of values, overall trend, and individual point-by-point variations.
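One of the two contributions named above, the gravitational redshift, follows from standard weak-field physics and can be sketched independently of the paper's orbital model; the grazing distances below are illustrative, not the actual Pioneer-6 geometry:

```python
# Sketch (not the paper's orbital model): weak-field gravitational redshift of
# a photon emitted at distance r from the Sun's centre, z ≈ G*M/(r*c^2).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg
R_sun = 6.96e8       # solar radius, m

def z_grav(r_m: float) -> float:
    """Weak-field gravitational redshift for emission at radius r_m."""
    return G * M_sun / (r_m * c**2)

# Illustrative grazing distances (not the actual Pioneer-6 signal path):
for n in (1, 3, 10):
    print(f"r = {n:>2} R_sun: z_grav = {z_grav(n * R_sun):.2e}")
```

At the solar limb this gives the familiar ~2 × 10<sup>−6</sup>, falling off as 1/r along the signal path; the interactional contribution is what the paper's own method adds on top of this baseline.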