Is "Controlled Remote Viewing" Science?

Recently, a blogger wrote that Hitler had a "psychic" spy unit that was very successful, and was imitated by the US. As a graduate of the US program, this blogger is willing to teach the scientific method that apparently is used by every intelligence agency in the world. In an unrelated blog, several Chinese journalists have been attacked for publicly debunking "pseudo-science" in China, including psychics who predict earthquakes.

What is going on with this revival of "psychic" arts?

Well, to be more precise, it is only a revival in the West, where Materialism had basically rejected all supernatural claims as fakery. The East, despite a big dose of Communist materialism, apparently remains a sucker for such claims, which often are disguised as "scientific". You might note the tone of the "Controlled Remote Viewing" promoter is also "scientific". So psychics have repackaged their product under the guise of anti-supernatural materialism, which is why the Chinese journalists refer to it as "pseudo-science".

How then does one tell the difference between pseudo-science and science?

I am reminded of G. K. Chesterton's Father Brown mysteries, where a character witnessing a strange murder insists it was done by ghosts. Father Brown admonishes him that it is precisely because he is familiar with ghosts that he knows this murder was done by humans. In the same way, if "pseudo-science" is identified only by authority, only by others making the negative claim "this isn't science," then there are no markers for identification. This despite Langmuir's attempt to mark out marginal science with a list of warning flags. Rather, if we believe in psychic powers, if we know what the paranormal is, then we can readily distinguish between the two sorts of science. Father Brown's wisdom comes not from keen logic of the sort found in Sherlock Holmes and the skeptics, but from a wider knowledge than Holmes would admit. It is not disbelief in psychic phenomena that makes us good debunkers, but experience of psychic phenomena.

That is, I think Christians are obligated to believe in witchcraft and the occult, and also obligated to abstain from it. Sort of like the comment that "even the demons believe, and shudder". Believing is not the same thing as advocating. We have institutionalized the abstinence into a disbelief, which while very useful culturally, is very dangerous personally. During my abortive attempt to become a missionary, I was given advice at training camp that we should take very seriously cultural taboos and local witchcraft, because many missionaries have wrecked upon those rocks when they denied the existence of demons. Several eyewitness accounts were then given.

So how is it that Western culture has done so well in the past 300 years by denigrating and denying the supernatural? How is it that materialism has been so successful, when it clearly gets something wrong about the nature of reality?

I think the answer is in that humorous piece of self-promotion on "Controlled Remote Viewing" that started this blog. The author comments how psychics are scientifically calibrated, so that they know they are "80% correct in colors, but only 70% on shapes," etc. (BTW, if the percentages were anywhere close to what the author promotes, the author should be a billionaire by now from playing Las Vegas, the lottery, the stock market, etc., not teaching seminars for a piddling salary, telling people how great it is.) So what does it mean to have partial knowledge, but not know which part?

The answer might be found in another blog on peer review, where a couple of medical researchers made a model of peer review with five kinds of reviewers: good, random, "rational" (self-serving), nasty, and kind. Only the first sort separates the reviewed papers properly; the second is arbitrary; the third is evil; the fourth always rejects papers; and the fifth always accepts them. They discovered that the greatest degradation of published papers came from the second and third sorts. As little as 10% of reviewers of the random or self-serving sort made the quality of published papers indistinguishable from chance.
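The flavor of this model is easy to capture in a simulation. Here is a toy reconstruction, not the researchers' actual model: the five reviewer types, a pool of submissions half of which are good, and a measure of how the published record degrades as flawed reviewers enter the mix. All names and mix probabilities below are my own illustration.

```python
import random

# Toy versions of the five reviewer types described in the model:
REVIEWERS = {
    "good":     lambda rng, good: good,                # accepts iff the paper is good
    "random":   lambda rng, good: rng.random() < 0.5,  # coin flip
    "rational": lambda rng, good: not good,            # self-serving: suppresses good work
    "nasty":    lambda rng, good: False,               # rejects everything
    "kind":     lambda rng, good: True,                # accepts everything
}

def published_quality(mix, n=50_000, seed=1):
    """Fraction of accepted papers that are actually good, for a given
    mix of reviewer types (type -> probability of drawing that reviewer)."""
    rng = random.Random(seed)
    types, weights = zip(*mix.items())
    accepted = good_accepted = 0
    for _ in range(n):
        good = rng.random() < 0.5                      # half the submissions are good
        reviewer = rng.choices(types, weights)[0]
        if REVIEWERS[reviewer](rng, good):
            accepted += 1
            good_accepted += good
    return good_accepted / accepted

print(published_quality({"good": 1.0}))                 # 1.0: only good papers in print
print(published_quality({"good": 0.9, "random": 0.1}))  # the published record slips
print(published_quality({"random": 1.0}))               # ~0.5: indistinguishable from chance
```

With a single round of review this toy degrades gracefully rather than collapsing at 10%; the researchers' stronger result presumably depends on the details of their multi-reviewer decision process.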

Now in actuality, there is something worse than chance, and that is when only wrong papers get published.

So this may explain why Materialism in the 20th century, with its denial of the existence of the occult, is actually better than a world with "Controlled Remote Viewing". That is, if we have only 80% success, the 20% failure rate poisons the result so badly that the outcome is worse than if we just went with ignorance. You know the old adage, "To err is human, to really screw things up takes a computer"; replace the last word with "psychic." When we mess with things that have the potential to be evil, we can end up in a worse condition than if we had gone in ignorant.
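The poisoning argument can be put in numbers. A minimal sketch, assuming (my numbers, not the post's) that acting on a failed prediction costs ten times what a successful one gains:

```python
def expected_value(accuracy, gain, loss):
    """Expected payoff per prediction acted upon."""
    return accuracy * gain - (1 - accuracy) * loss

# Assumed for illustration: a success earns 1 unit, a failure costs 10.
print(expected_value(0.80, 1.0, 10.0))   # -1.2: the 80% oracle is a net liability
print(expected_value(0.50, 1.0, 10.0))   # -4.5
print(expected_value(1.00, 1.0, 10.0))   #  1.0: only perfect accuracy pays
# Ignorance (never acting at all) scores 0, beating every imperfect oracle here.
```

Under these assumed stakes, the break-even accuracy is loss/(gain+loss) = 10/11, about 91%; anything less leaves you worse off than doing nothing.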

To me, this is why Moses gave the command that any prophet (=psychic, controlled remote viewer, palm reader, etc) who didn't have 100% accuracy was to be stoned. Why? Because either they were ignorant or evil.

Well why wouldn't 90% accuracy be good enough? Isn't anything greater than 50% an improvement?

Because in a highly technology-dependent world, where millions of people rely on, say, the internet or GPS or uninterrupted power, 90% or even 99% isn't good enough. Materialism, for all its faults, can deliver five nines--99.999%--in reliability. So "Controlled Remote Viewing" is really for fault-tolerant villages, not for modern civilization.
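The reliability comparison is easy to make concrete by converting each availability figure into expected downtime per year:

```python
MINUTES_PER_YEAR = 365.25 * 24 * 60

for availability in (50.0, 90.0, 99.0, 99.9, 99.999):
    downtime_min = (1 - availability / 100) * MINUTES_PER_YEAR
    print(f"{availability:7.3f}%  ->  {downtime_min:10.1f} min/yr "
          f"({downtime_min / 60 / 24:8.2f} days)")
```

Five nines works out to roughly five minutes of outage a year; 90% works out to more than a month.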

So if someone tells you that Materialism is an unmitigated evil, remind them of this advantage.

Comets and the Origin of Life: Part 4

My earlier triple post on "Voom!" has now been composed as a potential chapter in a Springer-Verlag book on extremophiles and the origin of life. It is a 25-page chapter, whose draft can be found in PDF format here. But for those who want to read it in smaller chunks, I've broken it down into 4 sections and posted them on this blog:

The Cometary Hydrosphere (and the Origin-of-Life)

5. Discussion and Conclusions

The OOL problem arose in the 19th century as materialism replaced theism as the metaphysics of science. (Some might argue that science is defined by its materialist metaphysics, and therefore deny the existence of science until the Enlightenment, but this narrow definition does a disservice to the contributions of Aristotle, Archimedes, and the countless "giants" on whose shoulders Newton stood.) In this more restricted scientific metaphysics, Aristotle's material causes trump his final causes, and life is to be described by "how" rather than "why." With the discovery by Pasteur that spontaneous generation is highly unlikely, and with the 20th-century advances in biochemistry that made spontaneous generation impossible in a finite universe ({Meyer09}), the OOL problem crystallized all the metaphysical objections to materialism that had been raised by Aristotle and subsequent generations of philosophers.

Elsewhere in 20th-century science, materialism posed less of an impediment, and progress in information theory, astronomy and cosmology led to several important discoveries of conservation laws. The late 1800s saw the development of thermodynamics, with its twin concepts of energy conservation and entropy growth, despite neither being a material property envisioned by Democritus. Thermodynamics was brought back into the fold of materialism by Boltzmann, who gave it a particle (statistical mechanics) interpretation, and along the way defined entropy as a probabilistic ordering of these particles. A half-century later, Shannon laid the foundation of the computer revolution by showing how the flip side of entropy is information, and how machines can process that information digitally.
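Shannon's measure is simple enough to compute directly. A minimal sketch of per-symbol entropy, H = Σ p_i log2(1/p_i), over the characters of a string (the function name and test strings are mine):

```python
import math
from collections import Counter

def shannon_entropy(text):
    """Per-symbol Shannon entropy in bits: H = sum(p_i * log2(1/p_i))."""
    counts = Counter(text)
    n = len(text)
    return sum((c / n) * math.log2(n / c) for c in counts.values())

print(shannon_entropy("aaaaaaaa"))  # 0.0: perfect order, zero surprise
print(shannon_entropy("abababab"))  # 1.0: one bit per symbol
print(shannon_entropy("abcdabcd"))  # 2.0: two bits per symbol
```

Maximum entropy (uniform symbols) is maximum surprise per character; a perfectly ordered string carries no new information at all, which is the "flip side" relation the text describes.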

The implications from Shannon's new field of Information Theory rippled outward into all of the sciences, especially physics. Quantum mechanics began to produce experiments whose outcome depended purely on information ({KimYH2000}). Hawking began to consider the effects of entropy and energy on astrophysics, concluding that entropy (and therefore information) is conserved even as black holes devour the matter of materialism ({Hawking05,Susskind08}). Thus in some paradoxical sense, the immaterial information is more permanent than the material matter.

This progress of materialism toward immaterialism is nowhere more evident than in the late career of the (Nobel-deserving) physicist John Wheeler, who described his life as composed of three phases: "Everything is Particles," "Everything is Fields," and "Everything is Information." His memorable aphorism for this final phase was "It from Bit"--existence comes from information ({Wheeler99}). To grasp how revolutionary this statement is, compare Wheeler's aphorism to the materialist-inspired existentialist aphorism of the mid-20th century--"existence precedes essence" ({Sartre43}). For the first time since Aquinas, scientists are now seriously considering not just the inadequacy of materialism, but the necessity of immaterialism.

This shift may explain the publication this year of a curious paper by Verlinde, in which he argues that conservation of energy and conservation of information, with some mathematical machinery of 4-D spacetime, can reproduce not only Einstein's general theory of relativity but also Newton's laws of motion ({Verlinde10}). That is, the materialist assumptions of point particles travelling through the void, so ably quantified by Newton's calculus, have now been derived, not assumed, from conservation laws of energy and information. Two completely immaterial concepts have been combined so as to derive the material. Materialism is not the basis of science, but a corollary of science.
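The core of Verlinde's derivation of Newton's second law fits in a few lines (the standard entropic-force argument of {Verlinde10}, compressed here):

```latex
% Entropy change when a mass m moves a Compton wavelength toward a holographic screen:
\Delta S = 2\pi k_B \frac{mc}{\hbar}\,\Delta x
% Unruh temperature associated with acceleration a:
k_B T = \frac{\hbar a}{2\pi c}
% The entropic force relation F\,\Delta x = T\,\Delta S then yields
F = \frac{T\,\Delta S}{\Delta x}
  = \frac{\hbar a}{2\pi c\,k_B}\cdot\frac{2\pi k_B\, m c}{\hbar}
  = ma
```

Everything material (force, acceleration) sits on the left; everything on the right is entropy, temperature, and information about position, which is the inversion the text describes.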

The information, including that in the Fourier realm which we argue is necessary to explain the origin of life, is now thought by many to be a permanent feature of the universe, which, from a physics standpoint, means a contingent feature of the Big Bang. The Anthropic Principle, which paled at the prospect of an explosion finely tuned to one part in 10^60, such that a grain of sand more or less would have made the universe devoid of life, must now contend with contingent information of far greater magnitudes.

This conservation of information from the Big Bang is often misunderstood as "front-loading," or as the British clergyman William Paley described it in 1802, as the winding up of a watch ({Paley1809}). The description would be accurate if it were applied to four-dimensional spacetime, but if time is treated as a separate dimension, then the metaphor is inaccurate. That is, the wound-up watch in 3D has a spatial arrangement of springs and levers that is then deterministically evolved in time: a boundary condition in space that sets x0 and v0 and then follows F=ma in time. Free choice seems to be missing from the equation, and likewise, entropy appears to be growing as the spring gets hot, destroying information. However, the watch in four-dimensional spacetime has not only a boundary condition in space but also a boundary condition in time. Thus information is continually propagating to the watch from that temporal boundary condition as it unwinds, just as information is imparted to the watch from its spatial boundary condition. That information may include, for example, instructions to rewind the watch. These information flows in spacetime mean that the system is not "closed" to outside influence, which would lead to entropic information loss, but is capable of becoming more complex.

Applying this to our OOL problem, the appearance of life is not explained as an internal law of complexification (aka, vitalism), or an internal production of information (aka materialist evolution), but as a consequence of external information flow (e.g., comets), bringing information from the 4D spacetime boundary condition that accompanied the Big Bang ({Sheldon08}).

What would this OOL scenario look like to an observer within the system? Making an analogy to Paley's watch, we can imagine labelling all the atoms of the watch, and then running the movie back one year to see how that watch came to be. We would see tagged atoms of copper and tin and zinc coming from ores, being purified and concentrated, melted and mixed and shaped and cut and polished. Then from locations all over the Earth, these components would arrive and concentrate into subassemblies, which further transport would bring to the watch factory and suddenly they would all assemble and the watch would begin to function. At each moment of the movie, information is being added in the form of concentrating, shaping and structuring. At no point in the movie would there likely be an entropic or information destroying event. Instead highly improbable events would be forcing the system toward a completed watch.

In the same way, the movie of OOL would show a highly diffuse and distributed information system, that kept getting concentrated, altered and structured. Not only would OOL involve more than a warm pond on the Earth, it would likely involve more than all the warm ponds in the Solar System and galaxy. As the universe expands and as the galaxies contract, the information of OOL would be moving from the 4D boundary of the Big Bang toward the middle, toward a spacetime volume somewhere on a comet, until a living organism suddenly appears. At this point, information is being created rather than conserved, and this model ceases to apply.

In conclusion, we have attempted to show that the OOL problem runs aground on the metaphysical shoals of materialism. Information theory provides a way forward, but must be expanded to include non-local, or Fourier-space, information to accommodate the vast amounts of information encoded by life. This required capacity, when generalized to Einstein's spacetime, is claimed to be a conserved quantity of the universe that must also incorporate time, making information flow a necessary consequence of information capacity. When combined with the conservation laws, this information flow is from the 4D boundary conditions of the Big Bang inward, toward the volume that includes the Earth. Materially, we find that comets have all the properties to mediate this information flow, and are therefore the physical realization of this mathematical necessity. We provide some weak justification for singling out comets for this monumental task, incidentally suggesting that they may also provide the solution to the "missing dark matter" problem.

When we examine the solution we have derived, we find that it has led several prominent physicists to propose the priority of information, and the derivative nature of materialism. Thus the OOL problem turns out to be solved by turning materialism on its head. Rather than finding life to be a difficult accomplishment for a materialist, we find instead that materialism is a trivial accomplishment for life.

Bibliography
Abrams and Lloyd(1999)
Abrams, D. S., and S. Lloyd, Quantum Algorithm Providing Exponential Speed Increase for Finding Eigenvalues and Eigenvectors, Physical Review Letters, 83, 5162-5165, doi:10.1103/PhysRevLett.83.5162, 1999.
Angus et al.(2007)Angus, Shan, Zhao, and Famaey
Angus, G. W., H. Y. Shan, H. S. Zhao, and B. Famaey, On the Proof of Dark Matter, the Law of Gravity, and the Mass of Neutrinos, Astrophys. J. Lett., 654, L13-L16, doi:10.1086/510738, 2007.
Axe(2004)
Axe, D. D., Estimating the prevalence of protein sequences adopting functional enzyme folds, Journal of Molecular Biology, 341, 1295-1315, 2004.
Barrick et al.(2009)Barrick, Yu, Yoon, Jeong, Oh, Schneider, Lenski, and Kim
Barrick, J. E., D. S. Yu, S. H. Yoon, H. Jeong, T. K. Oh, D. Schneider, R. E. Lenski, and J. F. Kim, Genome evolution and adaptation in a long-term experiment with escherichia coli, Nature, 461, 1243-1247, 2009.
Carter(1974)
Carter, B., Large number coincidences and the anthropic principle in cosmology, in IAU Symposium 63: Confrontation of Cosmological Theories with Observational Data, p. 291–298, Reidel, Dordrecht, 1974.
Clowe et al.(2006)Clowe, Bradac, Gonzalez, Markevitch, Randall, Jones, and Zaritsky
Clowe, D., M. Bradac, A. H. Gonzalez, M. Markevitch, S. W. Randall, C. Jones, and D. Zaritsky, A Direct Empirical Proof of the Existence of Dark Matter, Astrophys. J. Lett., 648, L109-L113, doi:10.1086/508162, 2006.
Coates et al.(2010)Coates, Jones, Lewis, Wellbrock, Young, Crary, Johnson, Cassidy, and Hill
Coates, A. J., G. H. Jones, G. R. Lewis, A. Wellbrock, D. T. Young, F. J. Crary, R. E. Johnson, T. A. Cassidy, and T. W. Hill, Negative ions in the Enceladus plume, Icarus, 206, 618-622, doi:10.1016/j.icarus.2009.07.013, 2010.
Cover and Thomas(1991)
Cover, T. M., and J. A. Thomas, Elements of Information Theory, John Wiley and Sons Inc, New York, NY, 1991.
Darwin(1859)
Darwin, C., On The Origin of Species by Means of Natural Selection, or The Preservation of Favoured Races in the Struggle for Life, John Murray, London, 1859.
Darwin(1887)
Darwin, F., The life and letters of Charles Darwin, including an autobiographical chapter, vol. 3, 18 pp., John Murray, London, 1887.
Davies(2000)
Davies, P., The Fifth Miracle: The Search for the Origin and Meaning of Life, Simon & Schuster, New York, 2000.
Dembski(1998)
Dembski, W., The Design Inference: Eliminating Chance Through Small Probabilities, Cambridge: Cambridge University Press, 1998.
Dembski(2002)
Dembski, W. A., No free lunch: Why specified complexity cannot be purchased without intelligence, Rowman and Littlefield Pub. Inc., Lanham, MD, 2002.
Dembski and Marks, II(2010)
Dembski, W. A., and R. J. Marks, II, The search for a search: Measuring the information cost of higher level search, Journal of Advanced Computational Intelligence and Intelligent Informatics, 14(5), 475-486, 2010.
Farley(1974)
Farley, J., The Spontaneous Generation Controversy from Descartes to Oparin, Johns Hopkins University Press, Baltimore MD, 1974.
Frank(1990)
Frank, L. A., The Big Splash: A Scientific Discovery That Revolutionizes the Way We View the Origin of Life, the Water We Drink, the Death of the Dinosaurs,..., Birch Lane Press, New York, 1990.
Frank et al.(1986)Frank, Sigwarth, and Craven
Frank, L. A., J. B. Sigwarth, and J. D. Craven, On the influx of small comets into the earth's upper atmosphere. I - Observations. II - Interpretation, Geophys. Res. Lett., 13, 303-310, doi:10.1029/GL013i004p00303, 1986.
Gallo and Feng(2010)
Gallo, C. F., and J. Q. Feng, Galactic rotation described by a thin-disk gravitational model without dark matter, Journal of Cosmology, 6, 1373-1380, 2010.
Gibson et al.(2010)Gibson, Wickramasinghe, and Schild
Gibson, C. H., N. C. Wickramasinghe, and R. E. Schild, First life in primordial-planet oceans: the biological big bang, ArXiv e-prints, 2010.
Gibson et al.(2010)
Gibson, D. G., et al., Creation of a bacterial cell controlled by a chemically synthesized genome, Science, 329(5987), 52-6, 2010.
Guth(2007)
Guth, A. H., Eternal inflation and its implications, Journal of Physics A Mathematical General, 40, 6811-6826, doi:10.1088/1751-8113/40/25/S25, 2007.
Hawking(2005)
Hawking, S. W., Information loss in black holes, Phys. Rev. D, 72(8), 084013, doi:10.1103/PhysRevD.72.084013, 2005.
Herculano-Houzel(2009)
Herculano-Houzel, S., The human brain in numbers: a linearly scaled-up primate brain, Front Hum Neurosci., 3, 31, 2009.
Hoover(2008)
Hoover, R. B., Comets, carbonaceous meteorites, and the origin of the biosphere, in Biosphere Origin and Evolution, edited by N. Dobretsov, N. Kolchanov, A. Rozanov, and G. Zavarzin, pp. 55-68, Springer US, 2008.
Hoover et al.(2004)Hoover, Pikuta, Wickramasinghe, Wallis, and Sheldon
Hoover, R. B., E. V. Pikuta, N. C. Wickramasinghe, M. K. Wallis, and R. B. Sheldon, Astrobiology of comets, in Instruments, Methods, and Missions for Astrobiology VII, edited by R. B. Hoover, G. V. Levin, and A. Y. Rozanov, pp. 93-106, Proc. of SPIE Vol 5555, Bellingham, WA, 2004.
Hoyle(1999)
Hoyle, F., The Mathematics of Evolution, Acorn Enterprises, Memphis, TN, 1999.
Kim et al.(2000)Kim, Yu, Kulik, Shih, and Scully
Kim, Y.-H., R. Yu, S. P. Kulik, Y. Shih, and M. O. Scully, 'Delayed choice' quantum eraser, Phys. Rev. Lett., 84(1), 1-5, doi:10.1103/PhysRevLett.84.1, 2000.
Komatsu et al.(2010)
Komatsu, E., et al., Seven-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Cosmological Interpretation, ArXiv e-prints, 2010.
Lee et al.(2010)Lee, Jonathan, and Ziman
Lee, R., P. Jonathan, and P. Ziman, Pictish symbols revealed as a written language through application of Shannon entropy, Proceedings of the Royal Society A: Mathematical, Physical and Engineering Science, 466(2121), 2545-2560, doi:10.1098/rspa.2010.0041, 2010.
Levin and Straat(1976)
Levin, G. V., and P. A. Straat, Viking labeled release biology experiment - interim results, Science, 194, 1322-1329, 1976.
Lewis(1964)
Lewis, C. S., The Discarded Image: An Introduction to Medieval and Renaissance Literature, Cambridge University Press, Cambridge, 1964.
Linde and Vanchurin(2010)
Linde, A., and V. Vanchurin, How many universes are in the multiverse?, Phys. Rev. D, 81(8), 083525, doi:10.1103/PhysRevD.81.083525, 2010.
Lloyd(2002)
Lloyd, S., Computational capacity of the universe, Phys. Rev. Lett., 88(23), 237901, doi:10.1103/PhysRevLett.88.237901, 2002.
Lovelock and Margulis(1974)
Lovelock, J. E., and L. Margulis, Atmospheric homeostasis by and for the biosphere: The Gaia hypothesis, Tellus, 26, 2-10, 1974.
Lucretius Carus(1921)
Lucretius Carus, T., Of the Nature of Things (De Rerum Natura), E. P. Dutton and Co. Inc., New York, 1921.
Meyer(2009)
Meyer, S. C., Signature in the Cell: DNA and the Evidence for Intelligent Design, HarperCollins Pub., New York, NY, 2009.
Mojzsis et al.(2003)Mojzsis, Harrison, and Manning
Mojzsis, S. J., T. M. Harrison, and C. E. Manning, The oldest known sediments on Earth: Implications for exobiology, Geochimica et Cosmochimica Acta Supplement, 67, 300, 2003.
NASA/Jet Propulsion Laboratory(2010)
NASA/Jet Propulsion Laboratory, Life on Titan? New Clues to What's Consuming Hydrogen, Acetylene on Saturn's Moon, ScienceDaily, 2010.
Oak Ridge National Laboratory(2009)
Oak Ridge National Laboratory, Oak Ridge 'Jaguar' Supercomputer Is World's Fastest, ScienceDaily, 2009.
Paley(1809)
Paley, W., Natural Theology: or, Evidences of the Existence and Attributes of the Deity, 12 ed., J. Faulder, London, 1809.
Pasteur(1861)
Pasteur, L., Mémoire sur les corpuscules organisés qui existent dans l'atmosphère. Examen de la doctrine des générations spontanées, Ann. sci. naturelles (partie zoologique) (Sér. 4), 16, 5-98, 1861.
Peretó et al.(2009)Peretó, Bada, and Lazcano
Peretó, J., J. Bada, and A. Lazcano, Charles Darwin and the origin of life, Origins of Life and Evolution of Biospheres, 39, 395-406, 10.1007/s11084-009-9172-7, 2009.
Pitts(2009)
Pitts, J. B., Gauge-Invariant Localization of Infinitely Many Gravitational Energies from all Possible Auxiliary Structures, Or, Why Pseudotensors are Okay, ArXiv e-prints, 2009.
Polanyi(1968)
Polanyi, M., Life's Irreducible Structure, Science, 160, 1308-1312, 1968.
Randall et al.(2008)Randall, Markevitch, Clowe, Gonzalez, and Bradac
Randall, S. W., M. Markevitch, D. Clowe, A. H. Gonzalez, and M. Bradac, Constraints on the Self-Interaction Cross Section of Dark Matter from Numerical Simulations of the Merging Galaxy Cluster 1E 0657-56, Astrophys. J., 679, 1173-1180, doi:10.1086/587859, 2008.
Rubin and Ford(1970)
Rubin, V. C., and W. K. Ford, Jr., Rotation of the Andromeda Nebula from a Spectroscopic Survey of Emission Regions, Astrophys. J., 159, 379, doi:10.1086/150317, 1970.
Sartre(1943)
Sartre, J.-P., L'Être et le Néant, Gallimard, Paris, 1943.
Schneider and Sagan(2005)
Schneider, E. D., and D. Sagan, Into the Cool: Energy Flow, Thermodynamics, and Life, University of Chicago Press, Chicago, 2005.
Shannon(1948)
Shannon, C. E., A mathematical theory of communication, Bell System Technical Journal, 27, 379-423,623-656, 1948.
Shannon(1993)
Shannon, C. E., Prediction and entropy of printed English, in Claude Elwood Shannon: Collected Papers, edited by N. J. A. Sloane and A. D. Wyner, pp. 5-83, IEEE Press, Piscataway, NJ, 1993.
Shannon and Weaver(1949)
Shannon, C. E., and W. Weaver, The mathematical theory of communication, University of Illinois Press, Urbana, 1949.
Sheldon and Hoover(2005)
Sheldon, R. B., and R. B. Hoover, Evidence for liquid water on comets, in Instruments, Methods, and Missions for Astrobiology VIII, edited by R. B. Hoover, G. V. Levin, and A. Y. Rozanov, pp. 196-206, Proc. of SPIE Vol 5906A, Bellingham, WA, 2005.
Sheldon and Hoover(2006)
Sheldon, R. B., and R. B. Hoover, Implications of cometary water: Deep impact, stardust and hayabusa, in Instruments, Methods, and Missions for Astrobiology IX, edited by R. B. Hoover, G. V. Levin, and A. Y. Rozanov, pp. 6309-0L, Proc. of SPIE Vol 6309, Bellingham, WA, 2006.
Sheldon and Hoover(2007)
Sheldon, R. B., and R. B. Hoover, The cometary biosphere, in Instruments, Methods, and Missions for Astrobiology X, edited by R. B. Hoover, G. V. Levin, and A. Y. Rozanov, pp. 6694-0H, Proc. of SPIE Vol 6694, Bellingham, WA, 2007.
Sheldon and Hoover(2008)
Sheldon, R. B., and R. B. Hoover, Cosmological evolution: Spatial relativity and the speed of life, in Instruments, Methods, and Missions for Astrobiology XI, edited by R. B. Hoover, G. V. Levin, and A. Y. Rozanov, pp. 7097-41, Proc. of SPIE Vol 7097, Bellingham, WA, 2008.
Shor(1995)
Shor, P. W., Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer, ArXiv Quantum Physics e-prints, 1995.
Strobel(2010)
Strobel, D. F., Molecular hydrogen in Titan's atmosphere: Implications of the measured tropospheric and thermospheric mole fractions, Icarus, 208, 878-886, doi:10.1016/j.icarus.2010.03.003, 2010.
Susskind(2007)
Susskind, L., The anthropic landscape of string theory, p. 247, Cambridge University Press, 2007.
Susskind(2008)
Susskind, L., The Black Hole War: My Battle with Stephen Hawking to Make the World Safe for Quantum Mechanics, Little, Brown, & Co., New York, 2008.
Verlinde(2010)
Verlinde, E. P., On the Origin of Gravity and the Laws of Newton, ArXiv e-prints, 2010.
Volders and van de Hulst(1959)
Volders, L., and H. C. van de Hulst, Neutral hydrogen in extra-galactic systems, in URSI Symp. 1: Paris Symposium on Radio Astronomy, IAU Symposium, vol. 9, edited by R. N. Bracewell, p. 423, 1959.
Wheeler and Ford(1999)
Wheeler, J. A., and K. W. Ford, Geons, Black Holes, and Quantum Foam: A Life in Physics, Norton, New York, NY, 1999.
Wickramasinghe et al.(2003)Wickramasinghe, Wainwright, Narlikar, Rajaratnam, Harris, and Lloyd
Wickramasinghe, N. C., M. Wainwright, J. V. Narlikar, P. Rajaratnam, M. J. Harris, and D. Lloyd, Progress towards the vindication of panspermia, Astrophysics and Space Science, 283, 403-413, doi:10.1023/A:1021677122937, 2003.
Zwicky(1937)
Zwicky, F., On the Masses of Nebulae and of Clusters of Nebulae, Astrophys. J., 86, 217, doi:10.1086/143864, 1937.


Comets and the Origin of Life: Part 3

My earlier triple post on "Voom!" has now been composed as a potential chapter in a Springer-Verlag book on extremophiles and the origin of life. It is a 25-page chapter, whose draft can be found in PDF format here. But for those who want to read it in smaller chunks, I've broken it down into 4 sections and posted them on this blog:

The Cometary Hydrosphere (and the Origin-of-Life)

4. OOL Detection

What does it mean that there is information in those Fourier components? How does non-local information contribute to micron-sized life? Perhaps the way to think of them is as "rogue" waves on the ocean. Most ocean waves are a meter or so high, but occasionally, with no warning, 10-, 20- or even 30-meter waves can topple a ship. Oceanographers call these rogue waves, and suggest that they form spontaneously as the constructive interference of many smaller waves that all arrive in phase. In the same way, each of these Fourier components of information can arrive "in phase" with other information, so as to add up to a greater sum.
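The rogue-wave picture can be simulated directly: sum many unit-amplitude waves, first with random phases, then with aligned phases. (A sketch; the wave count, frequencies, and sampling are arbitrary choices of mine.)

```python
import math
import random

def peak(phases, steps=4096):
    """Peak height of a superposition of unit-amplitude cosines with
    frequencies 1, 2, ..., len(phases) and the given phase offsets."""
    return max(
        abs(sum(math.cos(2 * math.pi * (k + 1) * t / steps + ph)
                for k, ph in enumerate(phases)))
        for t in range(steps))

rng = random.Random(0)
n = 50
random_phases = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
aligned_phases = [0.0] * n

print(peak(random_phases))    # modest: the components mostly cancel
print(peak(aligned_phases))   # 50.0: a "rogue" spike when all arrive in phase
```

Fifty small waves with scrambled phases never add up to much; the same fifty waves in phase momentarily stack to fifty times the individual height, which is the mechanism being borrowed for information.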

Such an interpretation of Fourier components assumes that there can exist an "information wave" that propagates through space. This is precisely what comets represent in the universe: carrying water, carrying chemicals that have been processed by heat and liquid water, carrying genes that are being transported in bacteriophages and cyanobacteria, even carrying entire ecosystems identical to stromatolites. So the OOLP premise is that at some time t-1, no cubic millimeter in the universe had information above the threshold, but a "collision" at time t raised the information in some cubic millimeter above that threshold.

Mathematically then, our OOL detector is a large calculation of the four-vector entropy density (where the time component measures anti-diffusive flow), summed over all Fourier components of interest. When this sum registers a spike above our threshold, we have the OOL. The OOLP is then found by doing the same calculation over the entire universe and looking for the first spike.
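As an algorithm, the detector reduces to threshold-crossing on a summed information-density series. A skeletal sketch (the series values below are invented placeholders, not computed entropy densities):

```python
def first_spike(series, threshold):
    """Return the index of the first sample exceeding the threshold,
    or None if no sample ever does -- the skeleton of the OOL detector."""
    for t, value in enumerate(series):
        if value > threshold:
            return t
    return None

# Invented placeholder values for the summed information density per epoch:
density = [0.10, 0.15, 0.12, 0.30, 0.95, 0.40]
print(first_spike(density, 0.8))   # 4: the epoch where the OOL registers
print(first_spike(density, 2.0))   # None: threshold never crossed
```

All the hard work, of course, is hidden in producing the series itself; the detection step is trivial once the four-vector sum exists.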

Now the reason for using four-vectors becomes apparent. In order to calculate the best possible number for the OOLP, we need to include all the time between the first star formation (which could then melt comets) and the 3.65 Ga isotopic identification of life on Earth. Using Einstein's block universe, with time being just another dimension like the three spatial dimensions, we can calculate the OOLP as a four-vector information integral over the expanding universe for those 8 Ga. As we described before, this additional volume and time barely changes the linear probabilities, adding a mere 22 zeroes to the OOLP, but it does add information in the Fourier components. A power-law decrease in information with scale length provides a modest increase in the Shannon information exhibited by life, though not enough to change our OOLP calculation by very much. However, it does indicate a way in which comets can contribute non-linear information to the OOLP calculation.

How does this comet "information wave" model differ from Darwin's warm pond? Darwin had all his chemicals in solution, a high-entropy, low-information situation, whereas comets keep all their chemicals locked in a deep freeze until the last moment, a low-entropy, high-information system. Darwin added sunlight and heat to his pond to provide the energy for life, a high-entropy energy source, whereas comets provide inhomogeneous chemicals barely at the melting point of ice, a low-entropy energy source. One way to characterize the suitability of energy sources is by calculating the Gibbs free energy, or exergy, G = H - TS, where H is the enthalpy, T the temperature, and S the entropy. Since the -TS term makes G drop as temperature rises, life prefers it cool, which is why trees evaporate 99% of the water they take in at their roots, a feat consuming 66% of sunlight energy, just to increase their exergy by cooling their leaves ({Schneider05}). And comets have their necessary liquid water at the maximum exergy possible for liquid water--273 K.
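A quick numerical check of the exergy argument with G = H - TS. The H and S values below are invented round numbers for illustration, not measured data:

```python
def gibbs(H, T, S):
    """Gibbs free energy G = H - T*S  (H in J/mol, T in K, S in J/(mol K))."""
    return H - T * S

H, S = 50_000.0, 100.0        # invented round numbers for illustration
print(gibbs(H, 310.0, S))     # 19000.0 J/mol in a warm pond
print(gibbs(H, 273.0, S))     # 22700.0 J/mol at melting ice: more free energy
```

For the same enthalpy and entropy budget, the colder system retains more free energy to do chemical work, which is the sense in which 273 K liquid water maximizes exergy.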

But most importantly, Darwin had no way for warm ponds to communicate. All the information had to be available locally, there was no method of communicating information, collecting information, or distributing it. Comets provide a mechanism for all these things, and in so doing, provide the network that permits Fourier space to influence real space, because Fourier space does more than communicate information, it also stores it. Another example demonstrates the importance of distributed information within a network.

Because the human brain has proportionately the same density of neurons as any other primate brain ({Houzel09}), it would seem that only brain size matters for intelligence. But most brain researchers argue that it is the number of cross-connects that makes the human brain so versatile, not simply the neuron count. The information lies not in the number of nodes (spatial complexity) but in the number of dendrites, the number of cross-connects (Fourier complexity). The 30 billion cells of the human brain, with some 10,000 cross-connects apiece, possess about 10^15 synapses. But many of those synapses form loops, recursive non-linear structures which may be important to memory, so intelligence likely scales non-linearly as well, to some power in synapse number.

This non-linear behavior is a consequence not just of the connections between two neurons, but of the connections between connections, which form loops with more and more neurons involved. That is, if we model one neuron as possibly being in one of two states, firing or non-firing, then two neuron loops can be in 4 states, 3 neuron loops can be in 14 states, and so on. The number of states, or the information bits per synapse, is an exponentially growing function, as it is in maximally entangled quantum states ({Abrams99}). Then instead of 10^15 bits, we have perhaps 10^(10^15) bits per brain, all because of the cross-connects ({Linde10}).
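The double-exponential jump from 10^15 synapses to something like 10^(10^15) states can be sketched with simple arithmetic. The sketch below counts only independent on/off configurations (2^N joint states for N binary elements); the loop-state counts quoted above include recursive dynamics and are larger still.

```python
# Back-of-envelope for the combinatorial explosion described above: N binary
# elements have 2**N joint states, so log10(states) = N * log10(2). For
# N = 10**15 synapses this gives a state count with ~3e14 digits, the kind
# of double-exponential figure quoted in the text. Purely illustrative.
import math

def log10_states(n_binary_elements):
    """log10 of the number of joint states of n two-state elements."""
    return n_binary_elements * math.log10(2)

for n in (2, 3, 10):
    print(n, 2 ** n)          # 4, 8, 1024 joint states

n_synapses = 10 ** 15
print(log10_states(n_synapses))   # ~3.0e14, i.e. a number with ~3e14 digits
```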

Therefore comets may provide a distributed but connected web of information flow in the solar system, galaxy and possibly universe. This would permit the information content of the whole to be greater than the linear sum of the parts. In our terminology, this permits the Fourier-space information to dominate over the local and linear information content. By analogy to the OOL problem of the entire universe having only about 10^120 computational bits, we can achieve much greater computational resources by replacing the serial computation of a silicon chip with the parallel processing of a quantum (non-local) computer. It is precisely because a quantum computer incorporates entanglement between bits, these non-local and non-linear correlations, that it is able to outperform the local and linear silicon-based computers ({Shor95}).

But can even Fourier space provide enough information processing? Supposing the entire universe were a computer, with its 120 decades of information, we would need a non-linearity of the 10th power to get it up to 1200 decades for a moderately complex molecule, and non-linearities of 100th or 1000th power to achieve a minimal life form. If simple loops produce a quadratic power, then how many "cross-connects" are needed to get 10th power or 100th power? Isn't that asking a lot from comets?

It is. Especially because the number of comets in the galaxy is not expected to be more than 10^21, or in the universe, 10^31. Collisions or cross-connects between comets are not expected to be more than 10, so we are not really asking comets to provide the information storage and processing, only the distributed information network which connects information rich regions. Once again, comets are macroscopic objects with some 10^39 water molecules, so they stand, logarithmically, about half-way between atoms and the universe in scale-size. Their purpose, then, is to provide the mechanism that connects information at the large scales of galaxies and stars, with the information at the small scales of cells and organisms. Without them, the Fourier space of large scales would be devoid of information, or at best, there would be no information flow between the smallest and largest spatial scales. Comparing the lost connections, we have the Earth being a volume of about one part in 10^59 of the universe, which logarithmically is roughly double the one part in 10^32 for the ratio of a microbe to the volume of the Earth. Therefore comets open up 200% more log-space volume for Fourier components.

To say it another way, comets provide a mechanism to connect the universe of Darwin's warm ponds together, so as to provide a unified information system greater than the linear sum of the parts. Comets, in addition to their linear importance in adding to the total number of Darwinian ponds, also provide the non-linear Fourier space information that connects information rich regions together, both at larger and at smaller scales. Without this connection, the Fourier series would truncate early, unable to connect the information on one planet with another, much less the information from the whole galaxy.

Cometary Abundances
The claim that comets connect large with small spatial scales should be elaborated, lest we fall back into the panspermia idea that comets merely transport microbes from one world to another, without providing an information source of their own (e.g., {Wickramasinghe03}). We distinguish our model that comets are an integrating complex information system necessary for OOL from the linear panspermia model by calling ours panzooia, where the prefix "pan" refers to its non-locality, and the root "zooia" refers to all life ({Sheldon07}).

Astronomical measurements of the motion of the stars in the Andromeda galaxy reveal that they are orbiting the center, but not with the Keplerian speeds found for planetary orbits around the sun. Rather, the stars seem to orbit as a rigid body, as if they are embedded in an invisible sphere ({Volders59,Rubin70}). The distribution of matter that permits such motion is proportional to distance from the center of the galaxy, such that the "funnel shaped" gravitational potential of a stellar source of matter is broadened into a flattened well, usually attributed to "dark matter", or massive material that cannot be seen with astronomical telescopes. These rotation curves do not require modifications to Newtonian gravity, or invocation of non-baryonic matter (heavy neutrinos); they merely require a radially dependent star/mass ratio, where the galaxy becomes progressively more "dusty" with radius ({Gallo10}). Since high stellar densities "heat" the comet's velocity, say, by gravitational slingshots and jetting of gases by the comet, one would expect this radial profile for cometary density in galaxies. If one believes the virial theorem describes the "temperature" of the heated cometary kinetic energy, then this is precisely the distribution expected for two 1/r^2 energy sources: gravity and radiation.
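The contrast between a Keplerian rotation curve and the observed flat, rigid-body-like curves can be sketched in a few lines; the units below are arbitrary (G times the central mass set to 1), purely for illustration.

```python
# Sketch contrasting a Keplerian rotation curve (v ~ 1/sqrt(r), central point
# mass) with the flat curves observed for galaxies, which require an enclosed
# mass M(r) growing linearly with r.
import math

def v_keplerian(r, GM=1.0):
    """Orbital speed around a central point mass: v = sqrt(GM / r)."""
    return math.sqrt(GM / r)

def v_flat(r, GM_per_r=1.0):
    """Enclosed mass M(r) proportional to r gives v = sqrt(G*M(r)/r) = const."""
    return math.sqrt(GM_per_r)

for r in (1.0, 4.0, 9.0):
    print(r, v_keplerian(r), v_flat(r))
# Keplerian speed falls as 1/sqrt(r); the flat curve does not.
```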

We refer to "dusty" as indicative of dark matter that has not yet been observed by telescope. If it were actual micron dust grains, we could observe them in the infrared frequency range. If it were neutral hydrogen, we could observe it in the radio, or if heated, in the UV range. If it were compact objects--black holes, neutron stars, brown dwarfs--we could observe their gravitational microlensing or their occultation of background stars. As it is, we only detect dark matter from its large-scale gravitational effects of changing the rotation curves of galaxies, or at the galaxy-wide level, lensing the background galaxies. That is to say, we are looking for dark matter that is neither so finely divided that it extinguishes light, nor so highly clumped that it can be seen gravitationally; that has neither a large photon cross section, nor a large gravitational cross section. This means it has to be larger than a sand grain, but smaller than a Jupiter. Comets fit that description.

The best support for large numbers of galactic comets comes from observations of the "bullet cluster", a colliding galaxy cluster. The collision produced a distinct shock wave in the heated hydrogen gas clouds, and a perceptible offset between the bright stellar center-of-mass and the gravitational lensing center-of-mass. With sophisticated modelling, the "dark matter" ratio of cross-sectional area to mass can be computed from this data ({Randall08}). An upper limit puts the ratio at 0.7 cm^2/g. If we calculate this ratio for comets, assuming a spherical comet of radius r, we have mass m = ρ(4/3)πr^3 and cross-sectional area A = πr^2, giving a ratio A/m = 0.75/(ρr). Plugging in a typical comet density of ρ = 0.5 g/cm^3, we get r ~ 2 cm, or a mass of roughly 20 g. This is a bit small for solar system comets, which tend to have radii of about 2000 m, some 10^5 times larger than this. However the bullet cluster merely sets an upper limit, and the smaller this ratio, the better it fits the comet model. It also illustrates very nicely the interpolation of cross-sections between gas with radii 10^-9 m and brown dwarfs with radii r ~ 10^8 m.
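The arithmetic above can be checked directly, using the same assumed bulk density of 0.5 g/cm^3 and the 0.7 cm^2/g upper limit.

```python
# Reproduces the cross-section-to-mass arithmetic: for a sphere,
# A/m = (pi r^2) / (rho * 4/3 pi r^3) = 0.75 / (rho * r). Inverting at the
# bullet-cluster upper limit of 0.7 cm^2/g with rho = 0.5 g/cm^3 (cgs units):
import math

def radius_from_ratio(area_per_mass, rho):
    """Solve A/m = 0.75/(rho*r) for r, in cm."""
    return 0.75 / (area_per_mass * rho)

rho = 0.5                                   # g/cm^3, typical comet density
r = radius_from_ratio(0.7, rho)             # ~2.1 cm
m = rho * (4.0 / 3.0) * math.pi * r ** 3    # ~21 g

print(r, m)
```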

There is one other objection to galactic comets fulfilling the role of "dark matter", and that is the assertion that 70% of the matter in the bullet cluster or in the universe is "dark" ({Clowe06,Angus07}). This would make comets and their associated carbon and oxygen more abundant than stars and their constituent hydrogen and helium, which would violate the 75:25:0.01 mass ratio of cosmological hydrogen:helium:metals production in the Big Bang nucleosynthesis (BBN). This is a serious problem for our galactic comet model, which can be resolved by either (a) following the current paradigm where 90% or more of dark matter is non-baryonic with small admixtures of comets consistent with Solar System abundances; (b) positing some early stage of galactic formation that burns H and He to C and O, which then form comets ({Gibson10b}); or (c) arguing that BBN models have not properly taken into account the "plasma" age of the universe, between nucleosynthesis and neutralization of atoms. Our preference is (c).

For example, if strong magnetic fields exist, then plasma modes can provide degrees of freedom not available to the hot-gas models of BBN, thereby prolonging the ~20 minute era of gigakelvin (GK) temperatures, and providing non-thermal channels for nucleosynthesis to continue. This may have changed the H:He:C:O ratios, whereupon later condensation into comets would have "hidden" the CO from spectroscopic discovery. That is, C and O are both "sticky" elements, likely to form interstellar solids that are not easily detected spectroscopically. Furthermore, their volatility in the proto-solar nebula would have caused them to migrate anti-sunward during the accretion phase, so that they are underrepresented in stellar composition, and hence in spectroscopic observations of stars. In a now-discredited theory, Frank argues for the ubiquity of meter-sized comets in the solar system, making many of the same sort of "invisibility" arguments for "cometesimals" ({Frank86b,Frank90}). Nevertheless, we find this to be a formidable objection, and therefore wait for a more comprehensive plasma-BBN model to address this issue quantitatively.

Comet Loops
Making the assumption that comets are not just ubiquitous but numerous in the early universe, we can now see how they can connect the large and small scales of the universe. Since the atoms of C, O, or their simple hydrogenized forms--CH4, H2O, CO and CO2--are all easily condensed, they would nucleate much sooner than giant hydrogen and helium clouds. As a consequence, gas clouds in the early universe would go unstable to accelerated gravitational collapse much sooner, and provide the superstructure of galactic clusters and voids observed today, as is provided by "dark matter" in all the cosmological models.

Galaxy clusters, such as the Coma supercluster, must form before the galaxy begins stellar formation, since they require a dark matter seed ({Zwicky37}), which can be provided by comets, the first objects to condense out of the proto-galactic nebula. The same is true at the smaller sub-galactic scale of globular clusters, whose stars are generally much older than the galactic disk. Since globular clusters have higher average stellar velocities than galaxies, and cluster galaxies higher than field galaxies, we would expect these clusters to evaporate comets with much higher relative velocities, enabling these high-speed comets to seed neighboring galactic nebulae. This self-seeding or catalytic character of comets is similar to diffusion limited growth, and may account for some of the cosmic galactic structure such as the "great wall" which is presently attributed to unspecified "dark matter."

As stellar formation began in proto-galaxies, the immediate heat flux would drive the comets away, due to gas jetting on the surfaces of comets. Thus comets have a built-in repulsion for stars, which today we observe in the galactic rotation curves discussed earlier. The greater the repulsion, the more likely that comets will "evaporate" from galaxies, and not contribute to star formation. Not coincidentally, this "repulsive force" depends on the spectral reflectivity of comets, and the big surprise in the past 25 years was the discovery that "old" comets are blacker than carbon soot. This makes them maximally sensitive to thermal radiation, and may be a consequence of cyanobacterial biofilms forming on the outside of the comet ({Sheldon06, Sheldon07}). That is, the spectral characteristic of comets that makes them more efficient galactic messengers is itself a consequence of life.

Therefore just as Gaia theory argues that Earth climate is stabilized by life, so it may be possible that galaxy formation was itself catalyzed by life, making this universe with all its anthropic contingency merely a consequence of homeostasis. Whether this hypothesis bears up or not, we present it as an example of how the largest scales observed in cosmology can be connected to the smallest scales of biology through the mediation of comets.

We began this discussion on comets by describing them as the messengers of the universe, much like the neurons in the brain, connecting the spacetime pixels of potentially spatial information to produce dynamical information, populating the matrix of Fourier transform information. We ended by arguing that life could modulate galaxy formation in such a way as to make the universe hospitable for advanced life, a combination of Strong Anthropic Principle and Gaia Hypothesis ({Carter74, Lovelock74}). But perhaps a better way to view this emphasis on Fourier space information is to recognize that the macrocosm mirrors the microcosm; that the universe bears more than passing resemblance to the cell, with comets providing an analog of the tubulin proteins that give shape and structure to the cell and are the highways for non-diffusive information transport. Thus science may recover the medieval golden chain of being that connects the earth and the heavens ({Lewis64}).

Comets and the Origin of Life: Part 2

My earlier triple post on "Voom!" has now been composed as a potential chapter in a Springer Verlag book on extremophiles and the origin of life. It is a 25 page chapter, whose draft can be found in PDF format here. But for those who want to read it in smaller chunks, I've broken it down into 4 sections and posted it on this blog:

The Cometary Hydrosphere (and the Origin-of-Life)

2. OOL Probability and Linearity

One of the many difficulties in discussing the OOL problem is that we confuse the theory with the practical, or the immaterial with the material. The combinatorial problem described above is theoretical, because no chemical reaction, no cellular biochemistry proceeds in the manner described. On the other hand when Venter announced his synthetic bacterium ({Gibson10}), using non-organic DNA (from machines which nonetheless use biologically derived reagents), the DNA fragments by themselves were useless. What they did next was to insert these fragments into a living yeast cell so as to reassemble the 1078 pieces, and then inject that repaired DNA into a related bacterial species whose own DNA had been removed. One hurdle that took many months to solve was a result of a single missing codon. Nowhere in this experiment was there a theoretical problem similar to the combinatorial math of the OOL problem; rather, all the biological protein machines were running and operational when the sleight-of-hand to change out the DNA occurred. Venter's success was not randomly finding a sequence, but converting the immaterial sequences into living biological material.

The combinatorial problem assumes that there is a fixed target that we are to search for blindly, like a needle in a haystack. By contrast, the Venter problem was to swap one DNA for another in a living organism, reminiscent of electricians who rewire factories without turning off the power. The Venter approach began with a living chemical environment and tried to change it without killing it, whereas the combinatorial approach began with a dead chemical environment and attempted to enliven it without trying. The latter is an attempt to find life without a driver, whereas the former is an attempt to keep life going while switching drivers.

If we know the sequence we are after, then like the Venter Institute, we can produce that DNA after due attention to quality control. But if we don't know the sequence, it will take a very long time to find it. The OOL problem has been stated as the difficulty of randomly finding the right sequence. Many computational biology approaches have been proposed as "smart" algorithms for finding the "living" sequence, but as Dembski argues, all these programs (Dawkins' Weasel, Ev, Avida) smuggle in information that helps with the search ({Dembski10}). In fact, the "No Free Lunch" theorem proves that without prior information, there is no "smart" algorithm that can outperform a random search, which is where we started our discussion ({Dembski04}).

But perhaps the problem is assuming some sort of maximally random "warm pond" as the starting point, and attempting life in one step. If an information-rich substrate, perhaps a clay, or a coacervate, permits the addition of information that leads to OOL, then the formation of life is much closer to Venter's problem, that of adding information without losing what is already there. That is, the "smuggling" of information, which is the bane of Dembski's algorithmic analysis, represents the pinnacle of Venter's experimental accomplishment.

For the OOL probability (OOLP) calculation does not need to begin at zero and in one jump make it to Mycoplasma; rather, it may be possible to combine two information-rich subsets--say coacervates and RNA "ribozymes"--to produce life. So both "smuggling" and "finding" contribute to the OOLP calculation. We are not saying that breaking down an improbable string into substrings changes the probability of forming the final string, only that "adding up substrings" possesses probabilities as important to OOL as finding the final string. The OOL problem is exacerbated, not reduced, by including the probability of experimentally "adding" information. For as the Venter Institute reported, it was quite difficult to smuggle in information, requiring real reagents manipulated in vitro with real organisms in growth media, rather than manipulating abstract symbols on a computer.

The probability of OOL is proportional to the probability of finding the minimal sequence multiplied by the probability of the method producing that sequence. The information from experimental production of the proper sequence is just as important as the information in the sequence. Since there are a great many ways to make substrings and add them together, each with its own probability, the OOLP must be the most probable method selected from all the possible paths to that destination. That is to say, while we cannot make the discovery of a long string of peptides more probable by breaking it into substrings, we can make the manufacture of that string of peptides more probable by breaking it into substrings. And non-linear production mechanisms have the potential to be the most probable.

If it is comets that transport the reactants for OOL, then just as a dimerization reaction proceeds at a quadratic or non-linear power of the density of reactants, so the density of comets functions as a non-linear factor in both the abiotic (purely chemical) OOL pathway, and the biotic or quasi-biotic evolution pathway. But whatever non-linear mechanism we invoke should be indifferent to where the living versus non-living boundary is placed, it should still generate sufficient probability to overcome the barrier. Comets fulfill this role nicely, providing the non-linear delivery of reactants for an abiotic OOL synthesis or the non-linear delivery of genes for biotic evolution. In both cases, it is a density dependent non-linear function that has the potential to overcome the linear barriers to OOL.
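The quadratic density dependence invoked above can be made concrete with a trivial sketch of a bimolecular encounter rate; the rate law is the standard idealization, and the rate constant is arbitrary.

```python
# Sketch of the non-linear (quadratic) density dependence claimed for
# dimerization-like delivery: a bimolecular rate goes as k * n**2, so
# doubling the comet/reactant density quadruples the encounter rate.
def bimolecular_rate(n, k=1.0):
    """Rate of two-body encounters for number density n: rate = k * n^2."""
    return k * n * n

print(bimolecular_rate(1.0), bimolecular_rate(2.0))  # doubling n quadruples the rate
assert bimolecular_rate(2.0) / bimolecular_rate(1.0) == 4.0
```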

3. Information Restatement of OOL

Despite the simplicity of modelling life on the molecular reactions needed to produce a living sequence, it would be inaccurate to quantify the probability of starting OOL only by the density of reactants, for this ignores the hidden effort of the biochemist, who, when abiotically synthesizing an important biochemical, carefully isolates the products from the reactants, performing several purifying steps for every synthesis step. Quantifying these actions of the chemist is analogous to a physicist's calculation of entropy. That is, purification produces no new products, but does reduce the entropy of the products at hand. The inverse of entropy is information, so whenever making an OOLP calculation, it would seem convenient to keep track of the information content.

In fact, it would be mathematically advantageous to cast the entire OOL problem as a problem of information, where life is assumed to be a highly informational state of matter. Then one could calculate OOLP quantitatively over the entire space from low information dilute chemicals to high informational life, and from the beginning of the time interval to the end.

This recasting of the OOL problem as a change in information content has several other advantages as well, making it independent of material details (viruses versus cyanobacteria) or temporal details (RNA-world versus metabolic pathways first). We merely set some informational threshold, and argue that when the information in the system exceeds that threshold, we have OOL. Of course, life also concentrates that information into a small volume, so perhaps we should restate the OOL problem as passing a threshold of information density achieved somewhere in our volume. This is still not quite right, because a dead bacterium may have the same information density as a live bacterium, yet be completely unable to propagate, and therefore not be "alive." So we should further refine our threshold to be information density flow, where spatial derivatives are used to establish the density, and temporal derivatives determine the flow.

If this information density is so very improbable, then the exact level of the threshold is unimportant, because the gradient should be so very steep. If information is measured in "probability units", we could set it at 10^150/cubic micron or 10^15000/cubic micron with no real difference to the outcome. Likewise, the diffusive entropy flow should be enormous at sharp gradients, so the mere fact that a cell doesn't rapidly dissipate with time is a signature of strong informational flow. The entropic dissipation is a function of the strength of the gradient and the local temperature, so for the ease of computation, we normalize the information flow to the expected gradient driven dissipation flow, with life demonstrating a flow of opposite sign to dissipation, and slightly greater than the expected dissipation flow. Note that for freeze-dried or lyophilized bacteria, the entropy flow is so very small that this living information flow in the opposite direction may be virtually undetectable, but nevertheless may exist even in a state of suspended animation.

This discussion has been necessarily qualitative, but considering the many orders of magnitude involved in the OOLP calculation, we do not think we have oversimplified the problem yet. The OOLP problem can then be stated as an appearance of very high informational density that simultaneously has a negative informational flow slightly larger than the expected positive entropic decay rate. Calculating this quantity, then, will require a calculation of information density over all space, and its time evolution or temporal derivative. In the next section we discuss this calculation mathematically.

Shannon Information in Spacetime
In a series of ground-breaking papers, Shannon developed "Information Theory" from scratch, developing it to describe the carrying capacity of telephone cables, and then applying it to the English language as a paradigm case ({Shannon48, Shannon49}). In the ensuing development of the mathematical theory, there have tended to be two simplifying directions of research: calculating the information capacity (spatial derivative of the telephone cable); and calculating the informational flow (temporal derivative of the signal).

As an example to clarify the difference between capacity and flow, consider the coaxial cable used to bring cable television into the home, which has a higher capacity than copper twisted pair. On the other hand, over time the bit rate of twisted pair went from the 300 baud (bps) acoustic modem, to the 1200 bps digital modem, to the 9600 bps "maximum" for vocal frequencies, to 56 kbps with digital compression. Each time a new modem arrived we were told this was the theoretical maximum for twisted pair. Today we have twisted pair carrying DSL at 1-4 Mbps, though only by upgrading the old telephone switches. This increase in bandwidth is not a function of time-independent geometry, as seen in the coax versus twisted-pair comparison, but a function of frequency and compression algorithms that are able to make each bit carry more information by relating it to the bits before and after it. Making a graphical analogy to water pipes, the coax is a wider pipe than twisted-pair, whereas the improvement in twisted-pair modems is a faster flow or greater pressure. Therefore information theory involves both a spatial and a temporal component, which are connected by the speed of the information carriers: electricity for telephones, sound-waves for liquids, chemical-waves for biochemistry, and comets for astrophysics.
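The capacity side of this story can be tied to the Shannon-Hartley theorem, C = B log2(1 + S/N). The bandwidth and signal-to-noise figures below are illustrative assumptions for a voice channel, not measurements.

```python
# Shannon-Hartley channel capacity: C = B * log2(1 + SNR). With the "pipe
# width" (bandwidth B) fixed, raising the usable SNR raises capacity, which
# is the margin successive modem generations learned to exploit.
import math

def channel_capacity_bps(bandwidth_hz, snr_db):
    """Shannon-Hartley capacity in bits/s for bandwidth in Hz and SNR in dB."""
    snr_linear = 10 ** (snr_db / 10.0)
    return bandwidth_hz * math.log2(1 + snr_linear)

# An assumed 3.1 kHz voice band at two assumed signal-to-noise ratios:
print(round(channel_capacity_bps(3100, 35)))   # ~36,000 bps
print(round(channel_capacity_bps(3100, 40)))   # ~41,000 bps
```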

What exactly is this information which Shannon described? Shannon began by characterizing the noise on the telephone line as a binary bit stream. Noise generally comes from thermal fluctuations, which are described by a Gaussian distribution, and is often either "white", where fluctuations are independent of frequency, or "brown", where fluctuations are inversely proportional to frequency. Very crudely then, signal is what remains when the noise is subtracted out. This means that signal is strongest on the "wings" of the Gaussian: the more improbable the distribution, the better the signal/noise ratio. As the number of particles gets larger, and a mole of molecules is already 10^24 particles, these Gaussians get extremely steep, making it much easier to manipulate the logarithm than the quantity itself. Boltzmann defined entropy as a constant times the logarithm of the number of states in the distribution (and had the equation
S = k ln(Ω)
engraved on his tombstone), so that Shannon's definition of information merely took the reciprocal of this number, which becomes the negative of the logarithm, or what he called "negentropy".
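A minimal sketch of this definition, computing the Shannon entropy of a discrete distribution; the example distributions are invented to contrast a flat, noisy case with a concentrated, information-rich one.

```python
# Shannon's information as negative entropy, in minimal discrete form:
# H = -sum(p * log2(p)). A flat ("noisy") distribution maximizes H; a
# concentrated one has lower entropy, i.e. higher negentropy.
import math

def shannon_entropy_bits(probs):
    """H = -sum p log2 p over the nonzero probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

uniform = [0.25, 0.25, 0.25, 0.25]   # maximally entropic: 2 bits
peaked = [0.97, 0.01, 0.01, 0.01]    # concentrated: ~0.24 bits

print(shannon_entropy_bits(uniform), shannon_entropy_bits(peaked))
assert shannon_entropy_bits(uniform) > shannon_entropy_bits(peaked)
```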

Relating this negentropy to the distribution of chemicals in a warm pond, a smooth and dilute distribution is the most probable, and hence the "noisiest" or most entropic distribution, whereas a concentrated spot of chemicals is the least probable and therefore the higher negentropy information content. Using the appropriate scale length, we might say that the information in a particular dissolved chemical is its local concentration divided by the expected average concentration. But note that this is a time-independent measure: this is the "width" of the information pipe, not its "pressure."

To calculate the information "pressure" of this chemical concentration gradient, we have to compare it to the state immediately before and the state immediately after. If the states are describable by a simple law, say, the diffusive motion of a chemical gradient, then the entropy increases, and the information decreases. If, however, there is no physical law that connects these states, or more precisely, the greater the deviation from the physical law of diffusion (∂f/∂t = D ∂²f/∂x²), the greater the information content in these adjacent states.
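This deviation-from-diffusion measure can be sketched numerically: evolve a one-dimensional profile by one explicit Euler step of the diffusion equation, then compute the residual for a freely diffusing profile versus one that is artificially maintained. All values are illustrative.

```python
# Sketch of "information pressure" as deviation from df/dt = D d2f/dx2:
# a profile that simply diffuses has ~zero residual; a profile that holds
# its shape has a nonzero residual, i.e. something resists dissipation.
def laplacian(f):
    """Discrete second spatial derivative with clamped endpoints."""
    return [0.0] + [f[i - 1] - 2 * f[i] + f[i + 1] for i in range(1, len(f) - 1)] + [0.0]

def diffusion_residual(f_old, f_new, D=0.1, dt=1.0):
    """Per-cell deviation of the observed change from D * laplacian(f)."""
    lap = laplacian(f_old)
    return [abs((fn - fo) / dt - D * l) for fn, fo, l in zip(f_new, f_old, lap)]

profile = [0.0, 0.0, 1.0, 0.0, 0.0]   # a concentrated spot of chemical
diffused = [fo + 0.1 * l for fo, l in zip(profile, laplacian(profile))]

print(max(diffusion_residual(profile, diffused)))   # ~0: pure diffusion
print(max(diffusion_residual(profile, profile)))    # > 0: profile maintained
```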

Shannon does this calculation for us in a 1951 paper on the information content of written English ({Shannon51}). To calculate the spatial information of written English, we can employ the same statistics as cryptographers, looking for the occurrence of specific letters, pairs of letters, triplets of letters and so forth. This is a static analysis independent of position, and is the standard cryptographic technique for cracking a substitution cipher. But Shannon wanted to know how correlated the letters are if we know the code. That is, a computer can tell us that the letter "q" is highly correlated with a following "u", but could it, say, determine that this rule is violated for Chinese names? A human could, so Shannon worked with human decoders of texts that had letters removed. He wanted to know how much information was encoded in these longer range correlations. In our example, how many letters does it take to decide the word is likely a Chinese name instead of a Latin-root language? Although Shannon worked with written texts, these same rules apply to spoken texts, which makes this experiment also a study of time-dependent information content.
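The letter-prediction idea can be sketched with bigram statistics; the toy corpus below is invented purely to show the "q"-followed-by-"u" effect, not drawn from Shannon's data.

```python
# Sketch of Shannon's letter prediction: estimate the conditional entropy
# H(next letter | previous letter) from bigram counts. In this toy corpus
# "q" is always followed by "u", so knowing the previous letter is "q"
# leaves zero uncertainty about the next one.
import math
from collections import Counter

def conditional_entropy_bits(text, prev):
    """H(next | previous == prev), from bigram frequencies in text."""
    followers = Counter(b for a, b in zip(text, text[1:]) if a == prev)
    total = sum(followers.values())
    return -sum((c / total) * math.log2(c / total) for c in followers.values())

corpus = "the quick queen quietly queued the quilt"
print(conditional_entropy_bits(corpus, "q"))   # 0.0: "u" is certain after "q"
print(conditional_entropy_bits(corpus, "t"))   # > 0: "t" has several followers
```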

Are the spatial and temporal ways of measuring information really independent? A recent paper on the undeciphered pictograms of the Picts demonstrates their independent character ({Lee10}). The question posed by these carvings was whether they represented a picture, a hieroglyphic/pictogram script, or a syllabic language. Examples of each script type were collected, and the single-symbol frequency statistics (spatial) were plotted on one axis against the statistics of the following symbol (temporal) on the other axis. Each type of communication occupied a distinct cluster on the graph, and the Pictish symbols were adjacent to syllabic languages, suggesting a communication form midway between hieroglyphics and a syllabic language. The key point is that a circular cluster for each communication method, rather than a long ellipse or line, indicates that the two axes are relatively independent, so that time and space correlations carry different information.

Applying this to our definition of life, we argue that both spatial and temporal correlations carry information. Not only does the cell exist as a distinct arrangement in space, but this arrangement persists in time, unlike random features seen in clouds or tea leaves. Now in order to avoid prejudicing one type of information over the other, we use Einstein four-vector notation to lump the temporal and spatial components together. Then our generalized Shannon information looks like:
[1]    S_α = k ln(Ω_α), where α = (0,1,2,3)
When α=0, then we are looking at the time component, where the information is proportional to the deviation from the diffusion equation, e.g., the magnitude of the negative diffusion coefficient needed to keep the structure from dissipating. Boltzmann wanted entropy to be in units of energy per kelvin, which accounts for the units on k, which in this four-vector notation would have k_0 also include the speed of light.

Since Sα is a function of the density of states, and density depends on whether the observer is moving with respect to the particles, we define a relativistic invariant for the information:
[2]    I = S^α S_α
where the factor k → k′/2 keeps the normalization.

Fourier-space Information
An analysis of the Shannon information above reveals that it is a local quantity. It depends upon sharp gradients in space, and the maintenance of these gradients in time (rather than diffusion spreading them out). But all these descriptions depend on nearest neighbors; they do not incorporate any global knowledge. By these criteria, the frost on a windowpane would have a claim to be life as well, and a vat of beer yeast would have no more information than the man who shovels it out. We need some sort of global indicator of information to indicate when all this information spread over a large space is correlated.

In Shannon's 1951 paper, he looked at long-range interactions in English words, how a letter two places removed from the missing letter influenced the prediction, or how a letter three places removed influenced it. In standard communication textbooks, these correlations are referred to as 2nd order, 3rd order, etc ({Thomas91}). We can generalize this long-range correlation as a Fourier transform, where 2nd order terms connect every other point, and 3rd order terms connect every third point etc. It isn't necessary to use sines and cosines as Fourier did, only that there be a transform with a basis set that covers all possible long-range correlations. Then just as nearest neighbors can have information in this density of states, so also non-nearest neighbors can have information in the transform of the density of states.
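A minimal sketch of such an i-th order term, implemented here as a correlation at stride i (pairing every i-th point) rather than a full sine-cosine transform; the sequences are invented examples.

```python
# Long-range "Fourier complexity" sketch: measure correlation at stride i,
# the discrete analogue of an i-th order term connecting every i-th point.
# A periodic signal shows strong long-range structure; uncorrelated noise
# does not.
import random

def stride_correlation(xs, stride):
    """Mean product of points separated by `stride` (zero-mean data)."""
    pairs = list(zip(xs, xs[stride:]))
    return sum(a * b for a, b in pairs) / len(pairs)

periodic = [(-1) ** i for i in range(1000)]             # +1, -1, +1, -1, ...
random.seed(0)
noise = [random.choice([-1, 1]) for _ in range(1000)]   # no long-range order

print(stride_correlation(periodic, 2))   # 1.0: every-other-point order
print(stride_correlation(noise, 2))      # near zero for uncorrelated noise
```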

Note that the zeroth order term in such a transform is just the same local term we described above. Thus the information quantity we are interested in looks something like:
[3]       I = Sα Sα + Σ_{i=1}^{L} Fi(Sα Sα) = Σ_{i=0}^{L} Fi(Sα Sα)
where Fi() is the transform at some spatial scale i, and L is the limiting scale size.

Does the information in these various modes add, as we have assumed? From physics, we know that the entropy S is simply additive over volumes, so the information in different volumes is also additive. We have made a weak argument that information is additive for temporal constancy (a negative diffusion coefficient), though it seems odd that something unchanging should gain information. But the time dimension has different units, and what matters is that the information does not disappear at the diffusion speed. Like the Red Queen in Lewis Carroll's Through the Looking-Glass, who must run just to stay in place, it takes energy to maintain homeostasis, and that energy expenditure (divided by temperature) is a negative entropy flow, which is information. So the spatial and temporal entropies add.

Do the Fourier components of information add as well? It would seem natural that they do, though the units (inverse space and time) are not the same, nor the magnitudes equal. In Shannon's 1951 work, the information per added letter in an English sentence dropped from some 4.8 bits for the 2nd letter to 1.2 bits for the 10th and following letters, which looks a lot like an autocorrelation function, and shows no peaks due to average word length or sentence length. The key point is that Shannon estimated this information by subtracting the information of the (n-1)th letter from that of the nth, thereby assuming that the information in the longer correlation lengths was additive.

So if the information in all these different modes is additive, and they all correspond to a logarithm of a density function, then we can create a density function for each mode and multiply these densities together. That is, if the transform of the logarithm is the same as the logarithm of the transform, then this sum can be replaced with a product,
[4]       I = k' ln ( Π_{i=0}^{L} Ω_{αi} )
where i signifies the basis vectors of the transform space.

How much do these other terms add to the total information? From Shannon's estimate of the information in added English letters, we see a drop of about a factor of 4 from the local, or zeroth order, component up to the 5th, after which the later components remain constant. Shannon found the information in each order by subtracting the previous order from the calculation, thereby assuming that each order adds information or negentropy, so that the densities of states multiply.

Using the result from English, that higher orders show a gradual decline from the peak at the zeroth order, we estimate that each decade of L contains the same amount of information, giving a power-law dependence of information on scale size. Then starting at an atomic scale of 10^-12 m = 1 pm, we would have about 38 decades up to the scale of the universe. This should probably be done for relativistic 4-volumes instead of lengths, so that the information in the Fourier components is about 4 x 38 = 152 times greater than that in the zeroth order. Reexpressed as a density of states, it is equivalent to a very high power of the density,
[5]          I ~ k' ln (Ωα Ωα)^152
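The decade bookkeeping behind the exponent of 152 is simple arithmetic, sketched here (taking 10^-12 m, i.e. a picometer, for the small scale, and assuming ~10^26 m for the observable universe):

```python
from math import log10

small_scale = 1e-12   # m: the atomic scale quoted above (a picometer)
large_scale = 1e26    # m: roughly the observable universe (an assumption)
decades = log10(large_scale / small_scale)
print(round(decades))       # 38 decades of length scale
print(round(4 * decades))   # 152 when counted over relativistic 4-volumes
```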

This rather heuristic approach can be physically motivated by considering a series of abiotic chemical steps that might hypothetically make life in a test tube. If we have a reaction series, k_1[a][b] --> [c] and k_2[c][a] --> [d], taking place in a single flask, we could combine them as k_1 k_2 [a]^2 [b] --> [d], where the reaction rate, or probability, is non-linear in [a]. Of course, some reactions may destroy the products, much as in atmospheric chemistry, so the expected output is found by solving a large matrix of coupled equations. This matrix is just a more accurate physical description of the independent and equal probabilities we had used in our earlier description of searching for arrangements that are "alive". The key difference is that we now note that the different arrangements are neither equally probable nor independent. This means we have to abandon our linear approach of adding probabilities, and consider the impact of non-linear terms.
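To see the non-linearity concretely, here is a minimal sketch of the two-step series above, integrated with a naive Euler scheme. The rate constants, concentrations, and step size are all illustrative assumptions, not measured chemistry.

```python
def simulate(a0, b0, k1=1.0, k2=1.0, dt=1e-3, steps=5000):
    """k1: a + b -> c;  k2: c + a -> d.  Returns the final [d]."""
    a, b, c, d = a0, b0, 0.0, 0.0
    for _ in range(steps):
        r1 = k1 * a * b       # first step consumes a and b
        r2 = k2 * c * a       # second step consumes c and more a
        a -= (r1 + r2) * dt
        b -= r1 * dt
        c += (r1 - r2) * dt
        d += r2 * dt
    return d

low, high = simulate(0.1, 1.0), simulate(0.2, 1.0)
print(low, high, high / low)
```

Doubling [a] more than doubles the yield of d, because the combined pathway is roughly quadratic in [a], as the combined rate k_1 k_2 [a]^2 [b] suggests.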

But if we lack the computational resources to find the correct reaction, how does it help to add in the lack of experimental ability to even produce the correct reaction? It helps because there are both linear and non-linear synthesis pathways, and if a non-linear pathway exists, then under some set of conditions it will dominate the linear path. That means two probabilities collapse down to one that is more likely. It is a method of improving the OOLP by picking out pairs of probabilities and replacing them with a better one.

Isn't it even more speculative to talk of non-linear synthesis? No, because we can examine the end product for examples of duplication, which would be the result of a non-linear input. Note that duplicates can occur at any size scale: they can be "aa" or "abab" or "aaabbb" and so forth. Finding such items in a data set uses tools like autocorrelation functions, which are calculated with Fourier transforms, or fractal analysis over "wavelet" basis vectors. The specific technique is not as important as the concept that information about duplicates, and their compression of the linear probabilities, can be found in "transform-space". Just as structure can be found locally by taking local gradients, so also duplicates can be found globally by taking "Fourier" components, which appear in the calculation as non-linear exponents on the densities.
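A bare-bones illustration of duplicate-hunting by autocorrelation (a plain shift-and-compare here, rather than the FFT a real analysis would use):

```python
def autocorrelation(seq, lag):
    """Fraction of positions where seq matches itself shifted by lag."""
    n = len(seq) - lag
    return sum(seq[i] == seq[i + lag] for i in range(n)) / n

signal = "abcabc" * 15   # an "abab"-style duplication with period 3
for lag in (1, 2, 3):
    print(lag, autocorrelation(signal, lag))
```

The correlation is zero at lags 1 and 2 but jumps to 1.0 at lag 3: the duplicate "shows up" at its own scale, just as a Fourier component would.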

Comets and the Origin of Life: Part 1

For those of you who have been wondering what became of this blogger for the past month, my earlier triple post on "Voom!" has now been composed as a potential chapter in a Springer Verlag book on extremophiles and the origin of life. It is a 25 page chapter, whose draft can be found in PDF format here. But for those who want to read it in smaller chunks, I've broken it down into 4 sections and posted it on this blog:

The Cometary Hydrosphere (and the Origin-of-Life)

Abstract

The triumph of Democritean materialism over biology in the 19th century was tempered by the discoveries that time was not eternal and that life was too complicated to organize spontaneously. This led to the paradox of assuming only material causes for life's origin while rendering them practically impossible. We address this 150-year-old origin-of-life (OOL) problem by redefining it as an information threshold that must be crossed. Since Shannon information has too little capacity to describe life, we redefine it to include time and correlated information. When this information quantity is generalized to Einstein's spacetime, information capacity implies information flow. Mathematical flows need a material carrier, which, from recent reports of fossilized microbial life in carbonaceous meteorites, we infer to be comets. With sufficient cometary density, which we hypothesize may be supplied by the missing galactic dark matter, non-linear correlations permit comets to assemble life from distributed information sources. If information is conserved, as suggested by many cosmologists, then this distributed source becomes the boundary condition of the 4-sphere describing the Big Bang. Recent advances in theoretical physics suggest that the conservation of information, along with the conservation of energy, is sufficient to derive Newton's laws, making materialism a corollary of information, and the OOL problem a trivial result of imposed Big Bang boundary conditions as transmitted by comets.

1. Introduction

The origin-of-life (OOL) problem has traditionally been viewed as an informational barrier, whereas comets, when they have been considered at all, have been treated as passive carriers of life. In this paper we attempt to show how OOL and comets form a single system that interacts synergistically as both information and transportation.

The Origin-of-Life Problem
After Darwin's success ({Darwin1859}) at reviving Lucretius' Materialism ({Lucretius50}), with its rejection of teleological or vitalist explanations for evolution ({Davies00}), there arose a paradox concerning the origin of that first life. On the one hand, Darwin rejected any inherent property of matter that made it alive; there had to be a naturalistic spontaneous generation from non-life. But on the other hand, Pasteur demonstrated that life always came from life, that spontaneous generation did not easily occur ({Pasteur1861,Farley74}). Darwin acknowledged the problem, but merely expressed a belief that under the right conditions (a warm pond) and with sufficient time (eternity), spontaneous generation could still be likely ({Darwin1887,Pereto09}).

Subsequent discoveries have not been kind to Darwin's estimate. The age of the universe has shrunk from eternity to 13.7 Ga ({Komatsu10}), and the complexity of the first living cell has grown astronomically from the "protoplasm" imagined by Darwin to the complexity of modern biochemistry ({Meyer09}). Even granting good early evidence of the liquid water environment (the warm pond), a complete set of the cellular nanomachines needed for life would require extensive assembly and dynamic initialization ({Polanyi68}). The probability that a proper assortment of pieces would randomly assemble has been estimated in various places at less than one in 10^41000 ({Hoyle99}). These are not even astronomical, these are cosmological improbabilities, as illustrated by the following example.

Suppose that the Venter Institute is successful in producing a stripped-down Mycoplasma with a mere 1000 codons describing a minimally functional 1000 amino-acid protein set (their synthetic M. mycoides JCVI-syn1.0 genome had 991,920 codons, {Gibson10}). Further suppose one had a computer that generated random arrangements of 1000 codons and then tested each for "likely life", where "likely" is a probability of one-half: how long would it take to find the right arrangement? Since there are 20 possible amino acids, a 1000-long chain has 20^1000 = 10^1301 permutations. Supercomputers today are capable of "petaflops", or a million billion instructions per second ({Jaguar09}). There are about 31 million seconds in a year, so if each instruction is a test of a random sequence, we have about 10^22 evaluations in a year. At this rate, 10^1301 tests would take 10^1279 years, far longer than the 10^10 years that the universe has existed.
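The arithmetic above is easy to verify with exact integers; this is pure bookkeeping on the same round numbers:

```python
permutations = 20 ** 1000          # arrangements of a 1000-long chain
print(len(str(permutations)) - 1)  # 1301: i.e. 20^1000 ~ 10^1301

tests_per_year = 10 ** 15 * 31_000_000   # petaflop machine * seconds/year
years = permutations // tests_per_year
print(len(str(years)) - 1)         # 1278: of order the 10^1279 years quoted
```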

We know that computer chips are getting faster and smaller, so could such a computer be built in the future, even if it is impossible today? There are physical limits on speed and size, the most rigorous being the "Planck time" given by quantum mechanics, the shortest interval of time that has any meaning, or about 10^-43 seconds. Then the maximum number of time intervals since the beginning of the universe is 13.7 Ga * 31 Ms/a * 10^43 intervals/s = 10^61 intervals. We further assume that at least one electron or elementary particle has to be involved in a calculation, so the number of computed bits available can be no greater than the number of particles in the universe, or about 10^80. Their product is 10^141 maximum computer calculations, assuming the entire universe were a quantum computer ({Dembski98}). More careful estimates, applying the limitations of general relativity to the quantum physics, give the computational capacity, or stochastic resources, of the universe to be about 10^120 operations on 10^120 bits ({Lloyd02}).
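The same bookkeeping for the universe-as-computer ceiling:

```python
planck_intervals = 10 ** 61   # 13.7 Ga * 31 Ms/a * 1e43 intervals/s
particles = 10 ** 80          # elementary particles in the universe
print(len(str(planck_intervals * particles)) - 1)   # 141: the 10^141 bound
```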

This result sums up the current impasse in OOL research: randomly generating the proper arrangement of even a 100-peptide enzyme is outside the computational abilities of the universe, much less the 991,920-codon genome of Mycoplasma. So not only has it proved difficult to create life in the laboratory, or to find a mechanism that spontaneously generates it, but theoretically it appears hugely improbable that a random search can ever find it.

The usual counter-argument to these cosmological improbabilities is that there are many more arrangements of the basic building blocks that are alive. That is, just because the minimal life form we chose has a specific arrangement of 1000 amino acids representing a one in 10^1301 probability, doesn't mean that there aren't 10^1270 other arrangements of those 1000 amino acids that are also "alive". Thus the OOL computation need only find one of those other possible permutations, which would increase the odds and make spontaneous generation feasible.

The counter-counter argument is that the putative ubiquity of "living" permutations should cause spontaneous generation to be observed frequently, which it hasn't been ({Pasteur1861,Hoyle99}). Or it should leave behind a body of alternate forms for these basic proteins and gene codings, which it hasn't ({Meyer09}). Furthermore, a laboratory that randomly permutes enzymes and genomes should frequently produce viable organisms, which also hasn't happened ({Barrick09}). Rather, mapping out the viable organisms in "permutation space" reveals a tremendous desert of unviable arrangements ({Axe04}). Life appears to be highly specific, ordered, and particular, which puts severe limits on the ease of randomly making life.

An alternative counter-argument suggests that we live in a "space" of many universes, each one birthed by random fluctuations of the vacuum. Since a fluctuation has a certain "Planck-length" volume, then at any given instant the universe can be divided into cubes of size 10^-35 m, or about [10^61]^3 cubes today. If we argue that a new universe can form in one Planck time, and each universe can spawn more universes, then we have an exponential series from our Big Bang onward (1 the first instant, 2^3 the next, up to [10^61]^3), for a total of fewer than 10^243 "similar" universes ({Guth07,Susskind07}). String theory, with its 10 dimensions, achieves a larger number of about [10^61]^10, though Linde believes this number to be surpassed by entropy considerations of the cosmological constant, or about 10^[10^82] universes ({Linde10}). Of course, our universe might be an unmeasured number of years after some "original" Big Bang, so in principle the number of multiverses in the "landscape" may be infinite and time eternal.

There are numerous difficulties with the scenario sketched out above. For one thing, once infinities are posited, it is difficult to single out any one solution, since all solutions are now possible. The fastest growing solution becomes the dominant one, and the discussion changes from finding one solution to the difficulty of finding the fastest-growing one when the physics is variable. For example, suppose that in one of those infinite universes there comes into existence a being with the ability to communicate between multiverses, and therefore to inject information into any particular universe. By definition, this behavior would be supernatural, polluting the undirected search for life in our universe. Thus the entire purpose of the multiverse--the random production of life--is destroyed by the multiverse hypothesis itself.

Ignoring this incoherence of infinities, there is significant doubt that universes can actually appear in the vacuum as hypothesized, since energy is not conserved in this model ({Pitts09}). Furthermore, the minimal life recorded by Venter has combinatorial information of at least 10^2,281,000, not including the dynamical information, or the permutations of "fine-tuned" physical constants that add at least another factor of 10^1000. Nor is it clear that each multiverse would sample the solution space evenly, or even whether dynamically interacting systems can be constructed from non-interacting random steps. Thus it would appear that the multiverse solution of reintroducing Democritus' infinities produces more problems than it solves.

The arguments and counter-arguments do not agree on the density of viable arrangements in protein phase space, they do not agree on the minimum codon length needed for life, or even the nature of the universe. But this lack of agreement should not distract us from recognizing two common characteristics of the debate: first, the OOL problem involves astronomical probabilities in which incremental progress is measured in factors of ten billion; and second, a successful OOL theory hinges on ways to bring these astronomical probabilities down to earth. In this paper, we argue that comets can accomplish this, though not without a cost.

The Cometary Hydrosphere
Over the past several decades, it has become increasingly apparent that life does not reside solely on the planet Earth, but probably exists throughout the Solar System on many rocky bodies possessing liquid water ({Levin76, Coates10, Strobel10, NASA2010}), and even more importantly, on numerous small icy bodies, called comets when they cross the orbit of Mars, melt, and acquire visible tails ({Hoover04, Hoover08, Sheldon05}). Comets distinguish themselves in several ways from rocky bodies: they have short "summers" when they come near the Sun and melt, followed by long "winters" far from the Sun when they refreeze; they explore a much larger volume of space in their orbit; they accrete material along their orbit; their "lifespan" is much shorter than that of rocky bodies; their "death" involves disintegration into many smaller fragments; and they are frequently ejected from the Solar System gravitational well ({Sheldon05, Sheldon06, Sheldon07}). These properties of their "life cycle" cause them to behave differently than rocky planets.

Supposing that a significant fraction of melted comets become infected with life, this cometary life cycle supports an entire cometary biosphere that is able to survive, spread, and transport life potentially across the galaxy, possibly from the moment when stars began to form 12 billion years ago ({Sheldon08}). This potential cometary biosphere can then interact with the OOL problem in several fundamental ways. For example, if comets can transport life between rocky bodies, then finding life on two different planets does not require hypothesizing that life began independently on both; it could have begun on either rocky body and spread to the other, or even begun on comets and spread to both. Transportation thus changes the OOL problem by permitting a much larger volume of space to be involved. But transportation does more than enlarge the volume; it also allows a greater timespan (distance divided by comet velocity) to be involved. Thus comets integrate the entire volume of the galaxy (with a smaller probability of including multiple galaxies), over nearly the entire time since the Big Bang, into the OOL problem.

If we hypothesize that the OOL probability (OOLP) scales as the probability of a rare event times the number of locations and the amount of time, then this inclusion of a galaxy of locations, existing over the entire time since the Big Bang, increases the OOLP by a factor of approximately 10^22 compared to the probability of forming only on Earth. This number can be estimated very roughly by calculating the ratio of cometary to Earth OOLP, which means comparing the ratios of time intervals and of volumes available for life (to begin first) on comets versus on Earth.

Calculating the maximum amount of time available for Earth OOL gives the interval between the molten-rock Hadean Age at the end of the planetary bombardment, 3.85 Ga ago, and the first appearance of bio-fractionated carbon at 3.65 Ga ago ({Mojzsis03}), or about 200 million years. A similar calculation for cometary OOL runs from star formation, some 12 Ga ago, to the same 3.65 Ga endpoint, or about 8 billion years. The ratio of time intervals for cometary/Earth OOLP is then about 40.

Likewise, an estimate for the volume of cometary water around the planet Earth since the Hadean is approximately equal to the volume of ocean water on the planet Earth today (see {Sheldon07}). If each star system has a similar amount of cometary water, then multiplying by the number of stars in the galaxy (and assuming no other rocky planets with oceans) gives about 10 billion times more volume in galactic comets than on the Earth. If we further assume that other galaxies are accessible by comet (which is uncertain, because the high velocities of intergalactic comets needed to cover the distance preclude capture into the gravitational well of a target solar system), we can increase this number by another factor of 10 billion to account for the number of observable galaxies in the cosmos. The ratio of cometary water volume to Earth water volume then increases the cometary OOLP by about 10^20. Finally, combining the time and volume ratios gives a rough estimate of a 10^22 increase in cometary over Earth OOLP.
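The 10^22 estimate can be reconstructed from the paper's own round numbers (all inputs below are those quoted figures, not independent data):

```python
time_ratio = 8e9 / 2e8   # 8 Ga cometary window vs 0.2 Ga Earth window
stars = 1e10             # comet-water multiplier for stars in the galaxy
galaxies = 1e10          # observable galaxies (a further assumption)
volume_ratio = stars * galaxies
boost = time_ratio * volume_ratio
print(boost)             # ~4e21, i.e. the "approximately 10^22" quoted
```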

We could increase this slightly by assuming a distribution of rocky bodies with liquid water oceans throughout the galaxy, but all such refinements hardly change this number by more than one order of magnitude, which, when compared to estimates of Earth OOLP < 10^-1301, provides insufficient progress in solving the OOL puzzle. Note that these refinements are all "linear adjustments" to the OOL probability calculation; they scale directly with volume or time duration. As we discuss later, it is enticing to consider whether the 70% "dark matter" of the universe is composed entirely of comets, in which case we would have to increase our estimate of the cometary hydrosphere by another factor of about 10^7, and yet would have made little progress on raising the OOL probability close to 1/2. That is, even the most radical changes to cosmology, incorporating every cubic centimeter of potential water into the OOL calculation, would hardly move the resulting probability.

These sorts of considerations suggest that the OOL problem will not be solved by tweaking linear factors. If time and space enter the OOL probability only linearly, there is no way to raise it to 1/2. However, there may yet be non-linear corrections to the OOL probability made possible by the discovery of the cometary hydrosphere.

The End of Christianity, part 3

This (l-o-n-g) blog reviews the recent book by William Dembski, The End of Christianity. The first two parts of this blog, describing the theological problem, are here and here. Summarizing, Dembski discusses the internal debate among devout protestants between young earth creationists (YEC) and old-earth creationists (OEC) as a question of whether text or science is a more reliable source of truth (epistemology). We argued that the epistemology question also hinges on different views of reality (metaphysics). Thus, for example, even if we assert that text trumps science, the meaning of a word depends on what we think is the proper referent for it, which of course depends on observations. Though science is kicked out the front door, it sneaks in the back. So if we are going to resolve this YEC/OEC conflict, it isn't enough to win the epistemological trump suit; one still has to collect a metaphysical winning hand.

Dembski's book is organized into 22 short chapters, each almost independent of the others. So, for example, he deals with Special and General Revelation in chapter 8, and the primacy of text in chapter 11. There is nearly a circular argument in these independent chapters, where he first establishes the necessity of science within the Godhead (metaphysics), then establishes the primacy of text over science in revelation (epistemology), and finally derives the interpretation of texts from science (metaphysics). The formal solution to this circular reasoning, with God standing behind both word-definitions and science-creations, comes at the price of a serious ambiguity concerning God's relation to time. The epistemological argument is clear, but the metaphysical argument is muddy.

This isn't too surprising, since the Church has had 2000 years of clear epistemology, but only 150 years of metaphysical challenge. If Dembski is unclear, it is because we have had a century of theologians weaving and dodging the metaphysical pummelling of materialism. To quote GK Chesterton on the 20th century approach,
If it be true (as it certainly is) that a man can feel exquisite happiness in skinning a cat, then the religious philosopher can only draw one of two deductions. He must either deny the existence of God, as all atheists do; or he must deny the present union between God and man, as all Christians do. The new theologians seem to think it a highly rationalistic solution to deny the cat.
Replacing Chesterton's memorable phrase "that a man can feel..." with the ploddingly pedantic "that evil exists", then his conclusion is clear, "the new theologians think it a highly rationalistic solution to deny the existence of evil." Dembski's contribution is to insist that "death" is a reasonable substitute for "evil", and the 20th century OEC theologians, in defending an old earth, are denying that death is evil.

Dembski critiques this "pro-death" compromise by first addressing the metaphysical necessity of "evil death", then the epistemological primacy of text over science, then the metaphysics of causality. Since metaphysics and epistemology are somewhat interconvertible, we can rephrase that first goal as the epistemological primacy of experience, of science, within the Godhead itself. This book is then his attempt to meet twin epistemological goals: proving that science trumps text in the Godhead, but that text trumps science in revelation; that Biblical theology remains rooted in scientific realism, but that the Bible remains textually inerrant in the face of scientific challenge.

The Primacy of Science in the Godhead

His chapter 1, introducing the theology of the Cross, argues that two kinds of knowledge exist: description and acquaintance (p 19). We interpret them as knowledge about something conveyed by texts, and intimate knowledge learned by observation: textual vs scientific knowledge. Dembski immediately confounds us by insisting that scientific knowledge trumps textual knowledge, that Jesus had to suffer on the Cross to solve the problem of evil, aka "theodicy", he had to experience death to conquer death, because he could not solve the problem of evil by fiat, by attribution, by texts.

This will be a recurring theme throughout the book, where Dembski transforms the epistemological or the metaphysical into a moral problem. He will often insist that if we make the wrong choice of models--say, that Christ never had to suffer (science) because the Father could remove our sins by a fiat declaration (text)--then we suffer the consequences of a contradictory (text v. science) theodicy. To a certain extent, he was forced into this corner because YEC defend their view against OEC by using theodicy, arguing that the death required by an OEC from the beginning of the world 4.5 billion years ago makes the Fall insignificant, leaving no explanation for how suffering entered the world.

To Dembski's credit, he takes this YEC assertion at face value, and suggests that if OEC can develop its own theodicy (science v. text), there is no particular advantage to the YEC (text v. science) position theologically, but many disadvantages scientifically. This is why the jacket blurb talks so much about theodicy and the problem of evil. As I said earlier, I find this to be a red herring. The new theodicy Dembski presents is really a metaphysical not a theological novelty, and its major impact is on epistemology, not on morality, because Dembski did not want a new morality or a new legal defence for axe-murderers, but he wants to be squarely ensconced in the fortress of orthodoxy. So what he is producing is just a new path to an old destination, and that new path is his novel metaphysics.

Coming back to his method, he argues that for God, scientific supersedes textual knowledge, whereas for humans, God's knowledge revealed textually supersedes man's knowledge gained scientifically (p20).  That is, man has gotten himself into such a mess that his scientific knowledge is too contaminated, too noisy, too unreliable to extricate himself. However, God's uncontaminated scientific knowledge can be communicated textually (with implicit error-correction) such that it supersedes man's faulty observations. (This implies that in the absence of noise, science trumps text, but in noisy environments text trumps science. We'll come back to this implication later.)

The next three chapters (2-4) trace the origin of this noise, these faulty observations, to the Fall, which is principally of theological interest. Then in chapter 5, Dembski gives a brief introduction to the YEC position and its antiquity, followed by chapter 6 on its scientific inconsistencies. Despite much effort in the 50 years since "The Genesis Flood" was published, there really hasn't been a successful resolution of the chronological inconsistencies of a 6000-year-old earth, not least among them that YEC have no trouble accepting scientific chronologies after Abraham, but baulk at all scientific chronologies before Abraham. Chapter 7 recounts several YEC attempts to resolve this conflict by hypothesizing some discontinuity in physics that would allow two different chronologies. None of them hold up to scrutiny, leaving the YEC playing his trump card--when God created the earth, he made it appear old, like Adam's assumed navel or the wine of Cana.

Here's where the metaphysics begins to show.

For the YEC takes some Biblical words--morning, evening, day--and interprets them from scientific observations. "Twenty-four hour days" is often insisted upon, despite the lack of a Hebrew word for "hour" in this text, or even the absence of the number "24" associated with "day" in all of Hebrew scripture. (Jesus, however, does define a day as 12 hours long.) Obviously, this refined definition is based on some recent observation, some recent scientific knowledge assumed by the YEC. Yet when the OEC attempts to use some recent scientific observation to appeal to, say, a million year-old rock, the YEC insist it is invalid. Why should one scientific observation--24 hours per day--be valid, and another--million year-old rock--be invalid? The YEC would say that the first is based on a correct view of reality, and the second on an incorrect view of reality, so that the truth about words depends on possessing the proper metaphysical reality.

But what then is that "proper metaphysical reality" based on? A "literal" reading of Scripture, the YEC replies.

And what makes one reading "literal" and another not? Agreement with observations, not fanciful like myths. If the text says that Moses' staff turned into a snake, then it is not literal to say it turned into a turtle, or to say Moses cleverly moved it around like a snake, or even to say the verb is properly translated "to speak like a snake".

So the construction/interpretation of a word must be observation, but the construction/interpretation of a sentence must not yield to observation?  How many words must one string together in revealed texts such that it ceases to be observationally validated and begins to validate observation? Is grammar uninspired while semantics is? Are periods the metaphysical converter from fallible science to infallible text? There's an entire philosophy or "hermeneutical" science of Biblical interpretation that is being subsumed into the word "literal," and it doesn't necessarily support the YEC theodicy.

Dembski describes this conundrum at the end of his chapter 7, where he uses the word "backdrop" to describe this assumed hermeneutics, this metaphysical model.
On the other hand, questioning the constancy of nature as a whole does not seem possible. For in the very act of questioning one must hold constant the backdrop against which the question is posed. Questioning nature's constancy in general would deny this backdrop and thus be self-defeating.
We take Dembski's criticism as arguing that text and science are recursively related through hermeneutics, so that changing the metaphysics for some science but not some texts becomes an unstable system. The positive feedback of the system rapidly drives the interpretations to a bimodal, saturated state, which is as much a post-modern critique of science, as it is a Dembski critique of YEC. That is, it is obvious where YEC get their hermeneutics, but where does OEC get theirs? From the methodological naturalism of Science? Certainly Dembski would object to this oft-made YEC claim. Yet if all these metaphysical models are recursively related to text, then no matter where we start, we will end up in a circularly reasoned, bimodal solution. Is there any way out?  Dembski says there is.

To set the stage for his metaphysical model, Dembski discusses in chapter 8 the two forms of God's revelation, Special (text) and General (science). Presenting the two antagonistic views of modernism (science v. text) and YEC (text v. science), he then defends a non-contradictory and complementary position. Here are the concluding paragraphs:
Except for preserving the face-value [literal] interpretation of certain Old Testament passages (like Psalm 93), nothing of theological importance is riding on geocentricism. The same cannot be said for a young earth. A young earth seems to be required to maintain a traditional understanding of the Fall. And yet a young earth clashes sharply with mainstream science. Christians, it seems, must therefore choose their poison. They can go with a young earth, thereby maintaining theological orthodoxy but committing scientific heresy; or they can go with an old earth, thereby committing theological heresy but maintaining scientific orthodoxy.
Are the Book of Scripture and the book of Nature therefore irreconcilable? No. As we will see, a traditional understanding of the Fall is tenable regardless of one's view of the age of the earth.
Let's unpack Dembski's view. In God's knowledge, science trumps text, but He conveys it to us in both manners--the two books, Scripture and Nature. In our noisy "fallen" world, text has the superior error-correction, so God's knowledge is conveyed best by revealed text, but the text cannot contradict the same information that is transmitted by revealed science. Therefore it is wrong to force text and science into conflict, because at a deeper level they must be in agreement. Accordingly, our task is to use the relatively noise-free revealed texts as a guide to clean up the noise in the created science, without making their differences essential or fundamental. YEC, in arguing for an "apparent age", makes these differences fundamental and essential, which is a big mistake because it is unnecessary and distorts revelation.

Dembski doesn't quite put it this way, but invokes some psychologizing which obscures the point above, concluding his chapter 9 with, "A benevolent God will allow natural evil only as a last resort to remedy a still worse evil, not as an end in itself over which to glory", where I interpret his "remedy" as an action or science whereas his "an end in itself" is an attribution or text. Thus Dembski is once again insisting on the primacy of science over text in the Godhead, in the theology of the Church. But his primacy of science leaves him unprotected against the more liberal Christian interpretations which argue that if science/experience contradicts the revealed text, it is clearly the text that has to be modified.

That is, if extinct jellyfish are discovered fossilized in rock from the Pre-Cambrian 600 million years ago, then it is foolish for Dembski to insist that real death only came with the Fall some 599.99 million years later. Perhaps the "noise" of fallible science might account for 10% error in this scientific measurement, perhaps even 50% error in the K-Ar dating, but surely not all of it. Even allowing for the finitude of man and the degeneracy of his science, how are these real chronological discrepancies between the Bible and Science to be handled?  Surely it is the text that must be modified to accommodate these recent scientific findings, no?

But if one massages the text, say, by allowing a thousand years per day, or squared to get a million, or even cubed to get a billion, one still has the problem that death and suffering preceded the Fall, which propagates through the analogy to Christ to suggest that the atonement was not an observational/scientific event but an attributed/textual event, like attributed death. How can we modify the text without destroying our theology? Doesn't theology require an absolute commitment to text?

In part 3, starting with chapter 10, Dembski addresses this problem.

The Primacy of Text in Revelation

Having defended the view that experiential "scientific" knowledge is necessary and superior to textual in the Godhead, Dembski now has the opposite problem of demonstrating that textual knowledge is superior to scientific in human experience. He is forced to defend this position because so much of 20th century theology has either asserted the primacy of experiential human knowledge over textual revelation (modernism/atheism), or has attempted to separate the two so thoroughly that in practice textual knowledge becomes a "set of measure zero", an oxymoron of emotional anti-facts (existentialism, "new theologians"). In response to this 20th century remythologizing, Dembski begins with some modernist communication theory that supports a basic Platonism, moves into a Johannine theological reinterpretation of Stoic philosophy, and finishes off with an existential critique of 20th century neo-Orthodoxy. It is an impressive tour de force, combining modernist science with ancient philosophy.

Answering the materialist critique of texts in chapter 10, which sees texts as epiphenomena of a material world, mere references to material reality, Dembski introduces the concept of text as communication, using Claude Shannon's work at AT&T to discuss signal, noise and error correction, which we have alluded to earlier. By casting the ancient dogma of revelation as a modern problem in communication theory, Dembski demonstrates both the relevance and the necessity of revelation. By drawing an analogy between communication and the Trinity, he argues that concept, message, and transmitter are functions of, respectively, the Father, the Son and the Holy Spirit, so that Shannon's information theory is directly applicable to the Bible as revealed truth. He then finds the gospel in the process of receiving the message, arguing that God must employ an (unspecified) method of error correction for us to reliably get the message.

Having learned this trinitarian argument for communication at seminary, I very much enjoyed this tenth chapter. However there is an ambiguity of mapping, since Dembski puts the Holy Spirit in the transmitter (2 Tim 3:16?) whereas I usually put the Holy Spirit in the error correction (1 Jn 3:19ff). Obviously then, the mapping is not so very precise, despite Dembski's assertion (p88),
None of the preceding analogies between information theory and the God-world relation is, I submit, strained. Quite the contrary, they match up precisely and capture the essence of Christian metaphysics.
But the philosophical purpose of this analogy functions to tie the science--the actions, the experiences of God--directly to the word of God. By making the Creation a word-event, and then making the word-event a member of the Godhead, the Logos, Dembski establishes the complementarity, the tri-unity of word and act and actor, of concept and message and transmitter. This is how he solves the dilemma of science versus text, not by making them equivalent (which loses their separate identities) nor by making them equal and potentially conflicting (Hegel's dialectic and Kant's dualism), but by invoking a third thing, a tertium quid, a tri-unity.

Now I am beginning to really like this chapter!

But Dembski surprises me again, introducing John Wheeler's metaphysics as the bridge between the science/ text duality and a trinity. Wheeler, who was somewhat heterodox in his theology, described his scientific life as divided into three periods: Everything is Particles (materialism), Everything is Fields (Dualist Copenhagen QM, Gnosticism), and Everything is Information (?!). This was captured by his memorable phrase "it from bit", where existence (material or QM) derived from information.  Here is Dembski's conclusion from the chapter,
Wheeler is in fact tracing a revolution in physics and in our understanding of the world generally. Other scientists are now likewise beginning to see information as the fundamental stuff underlying physical reality. Information is the rock-bottom of reality, providing the final bridge between science and theology.
In Chapter 11, Dembski fleshes out this new metaphysical object uncovered by physics, attempting to demonstrate how information is at the same time non-material and yet determinative of the material. Some of the weird world of quantum mechanics is used to bolster the reality of this new metaphysical construct, which Dembski then identifies with several theological entities: the "new bodies" and "new earth" of St John's Apocalypse. Very roughly, Dembski identifies this metaphysical entity "information" with the Biblical entity "soul", which, if the remainder of Wheeler's three-realities are also mapped, would identify "particles" with Biblical body, and "fields" with Biblical spirit.

(I think previous theologians have made the opposite choice, identifying the soul with some materially-connected "ectoplasm" and "spirit" with the information or will, which is proof again that these analogies are not unambiguous. I think my blogs also have demonstrated a naive enthusiasm in finding an escape from Christian dualism, so perhaps the importance of this chapter is simply that there is justification within science for a trinitarian metaphysic.)

In any case, in chapter 12 Dembski will now identify this tertium quid with the Logos (John 1:1ff), invoking the entirety of St John's theology in support of a new equilibrium between text and science. The information of Logos preceded Creation, and preceded the words of Creation. Thus in some sense, we have primacy of information over both text and science, though Dembski does not clearly differentiate information from text. Thus in practice, Dembski collapses this new-found trinity of information back into a duality of text preceding science in the Creation. Note how Johannine theology which identifies Logos with Christ, and Christ with God ("In the beginning was the Logos, and the Logos was with God...") nevertheless insists that the Christ is eternally begotten of the Father, and therefore secondary to the Father. So in the Godhead, text is second, but in the Creation, text is primary.

In chapter 13, Dembski turns to answer the existential critique, that existence precedes essence, that being precedes purpose, that ontology trumps teleology. From the existential view, the most important thing is, like Descartes doubting his own existence, that we are here; all texts are mere rationalizing of this obvious fact, making texts secondary, and purpose is in our head, not in reality. (You can see how this can fit nicely with the materialist critique, and is in fact a response to materialism that "remythologizes" the texts.) That is, answering the materialist critique requires that Dembski demonstrate why text precedes science chronologically, leading to scientific primacy, but answering the existential critique forces him to demonstrate why text precedes science philosophically, leading to moral or theological primacy.

Dembski does this by claiming that the Creation was created by language for a purpose, and therefore being was designed to be in communion with God. Contrary to the existentialists, we along with the entire universe of stars and rocks and hoarfrost were created for a purpose, and that purpose was the communication of God's glory. Purpose precedes existence, and language reflects that purpose. Far from having text be an afterthought of rationalizing our existence, a philosophical derivative of our creation, language has a deep relation to the forethought of God, a Platonic relationship to ideas, to the Logos in the mind of God.

In Chapter 14, Dembski attempts to combine his two critiques of chapters 12 and 13, by arguing that creation involves two steps: a purely informational step, and an active, materially effective step, where the informational step chronologically precedes the materially effective step. From Aristotle's four causes, Dembski is arguing that the final cause precedes the material and formal causes. Why does this matter? Because later on he wants to argue that final causes are a-temporal, they don't have a timeline, they exist in the mind of God outside of time and temporal causality, unlike the material causes.

But before he can use this observation to solve his theodicy problem of the reason for suffering, he must first address the Kantian claim that final causes, or timeless information in the mind of God, doesn't actually do anything. That is, Kantians argue that if ideas moved particles, they would be material causes, not final causes, and thus it is irrelevant whether final causes precede material, or even if they exist at all, because we can never know them except through the material. Hence the real philosophical divide between thought and action, between noumena and phenomena.

In chapter 15, Dembski responds with some physics-based arguments that material causes are equivalent to forces, that forces are equivalent to energy (since force * distance = energy), so any cause that does not change the energy is in fact, a non-material cause. Finally, using the weird world of quantum mechanics, he demonstrates effects caused by information that are non-material.  (The quantum eraser experiment, for example.)  He goes on to argue that most of the objections to "miracles" are based on the assertion that a causally-closed system is both necessary and observed by science, but that such a closed system appears to be fixed and determined with no room for choice or morals. However by allowing the system to be susceptible to final causes, to be informationally open, one can obtain decreases in entropy without violating any material or physical laws, and thereby solve both the mystery of free will and the apparent violation of the law of entropy in one blow.

Not everything in his proof is beyond dispute; there are those who would argue that information is related to energy, or else memory states would be disrupted by thermal fluctuations. On that view, information flow is energy flow, and Dembski cannot quite avoid the criticism that he has merely relabelled a material cause. In a recent and controversial paper, a theorist argues that the entropy of space-time information is what generates forces, which would likewise prevent the Aristotelian distinction between material and final causes. In the end, the debate probably hangs on the scientific definition of information, but it seems unlikely that Dembski has solved the problem of free will and determinism.

In conclusion, Dembski has argued for the primacy of text over science in the Creation, that purpose precedes existence, that plan precedes action, that information is the critical ingredient of design with matter and material causes being secondary. Thus we can trust the Bible over the science of Creation, without making the metaphysical mistake of positing the primacy of text in the Godhead itself.

This has been a thoroughly theoretical discussion, yet has not addressed the root of the theodicy problem formulated at the end of chapter 9: if suffering is a consequence of man's Fall, then why does science describe death before the Fall?

The Parallel Universes of Revelation and Science

Dembski's moral dilemma is how to keep science primary in the Godhead, while keeping text primary in Creation. These two objectives collide in the Genesis account, where the science of God meets the texts of Creation. Which one wins?

I am reminded of Numbers 9, where the Israelites point out a conflict of two of Moses' commandments: to keep the Passover, and the uncleanness acquired from burying the dead before sunset. My first solution would have been to prioritize the Mosaic commands, and allow a greater to trump a lesser. But Moses doesn't do that. Instead he petitions God, who invents a third rule to handle conflicts--they can take a 30-day delay on celebrating Passover, but only on conditions of duress. So also, when text and science conflict, one would expect God to provide a third way of triangulation. Dembski does not disappoint.

In three short chapters, 16-18, Dembski defines the terms he is going to use to reconcile this epistemological conflict. Using chapter 15 as a springboard, Dembski argues that material causes are chronologically causal--if A causes B, then A preceded B chronologically--but in contrast, non-material causes (final causes) are a-temporal--if A is the purpose of B, A neither precedes nor follows B, since both are conceived together. This leads Dembski to posit an orthogonal axis (think vertical versus horizontal time axes) for purposeful relations. Whether A is the purpose of B, or B the purpose of A, relates to their position vertically, whereas their chronology is determined by their horizontal relationship. Dembski employs some questionable Greek etymology to label this vertical axis "kairos" and the horizontal axis "chronos", but the significance is that they are independent quantities. He then defends this independence of the two measures by discussing Newcomb's paradox, where knowledge of the future can affect the present, reversing the usual material causality between past and present. (Of course, the paradox is how precisely kairos affects chronos if they are really independent, a matter Dembski seems to skirt.)

With this textual machinery now in place, he argues in chapter 18 that the material causality of evil death in chronos could be a consequence of the Fall in kairos.  That God, knowing the future perfectly, anticipated (kairos) the Fall of man and the spread of evil, by creating (chronos) the death that afflicted dinosaurs and jellyfish of the old Earth. Here's his concluding paragraph,
In summary, God has employed both the intentional-semantic [kairos, textual] and the causal-temporal [chronos, science] logic in creating the world. The essence of evil is to assert the all-sufficiency of the causal-temporal [science] logic, allowing it to swallow up the intentional-semantic [text] logic. To understand creation aright, we need to understand how both these logics figure into creation. Specifically, we need to understand how the order of creation (which follows the intentional-semantic logic) relates to natural history (which follows the causal-temporal logic). Young-earth creationism attempts to make natural history match up with the order of creation point for point. By contrast, divine anticipation--the ability of God to act upon events before they happen--suggests that natural history need not match up so precisely with the order of creation and that the two logics of creation can proceed on independent, though complementary, tracks.
Now this idea of orthogonality is not new, in fact, it bears a striking resemblance to Kant's separation of the noumenal and the phenomenal, but what is new is that Dembski allows them to affect each other. This also is not entirely new, because Hegel saw in these dualities an infinite conflict, where thesis is opposed by antithesis leading to a synthesis that in time also generates an anti(syn)thesis and so on to eternity. So in chapter 19, Dembski responds to the criticism that he is merely channelling Hegel, by claiming that this interaction between kairos and chronos converges at infinity, that God who knows the end of all things, also knows the purpose of all things, so there is ultimately a divine theistic unity. This, Dembski argues, solves the Kantian problem of transcendence, as well as the existential problem of immanence. (The usual theological resolution finds this unity in the God-man, Christ, so Dembski's solution might be viewed as an alternative Christology, or perhaps, a christological ontology of the Father. Another reason for arguing Creation from a Trinitarian perspective.)

Finally in chapter 20 with all of his machinery in place, Dembski addresses the Genesis 1-3 account in the longest chapter of the book, cutting straight to the chase in the first paragraph:
We are now in a position to offer a reading of Genesis 1-3 that reconciles a traditional understanding of the Fall (which traces all evil in the world to human sin) with a mainstream understanding of geology and cosmology (which regards the earth and universe as billions of years old, and therefore makes natural evil [death] predate humanity). The key to this reading is to interpret the days of creation as natural divisions in the intentional-semantic [text] logic of creation. Genesis 1 is therefore not to be interpreted as ordinary chronological time (chronos) but rather as time from the vantage of God's purposes (kairos).
Accordingly, the days of creation are neither exact 24-hour days (as in young-earth creationism) nor epochs in natural history (as in old-earth creationism) nor even a literary device (as in the literary-framework theory). Rather, they are actual (literal!) episodes in the divine creative activity. They represent key divisions in the divine order of creation, with one episode building logically on its predecessor. As a consequence, their description as chronological days falls under the common scriptural practice of employing physical realities to illuminate spiritual truths (cf. John 3:12).
Despite assurances that this is going to be a really different solution, it comes out looking a lot like the despised "framework" model. Claiming that the language is not "exact" sounds a lot like saying it isn't literal, and claiming that the language "employs physical reality to illuminate spiritual truth" sounds a lot like "literary". So other than making "literary" more precise by employing all sorts of philosophical distinctions, is there any positive, constructive difference from the mainstream OEC interpretation?

Dembski claims there is. If Genesis 1 is to be taken kairologically, there is still information in the ordering of kairos--one of those many unspecified interactions between chronos and kairos. Here's the opening to a paragraph that reconstructs the framework hypothesis:
A kairological interpretation of the creation days in Genesis now proceeds as follows: On the first day the most basic form of energy is created: light. With all matter and energy ultimately convertible to and from light, day one describes the beginning of physical reality. With the backdrop of physical reality in place, God devotes days two and three to ordering the earth so that it will provide a suitable home for humanity.
Which is to say, the kairological perspective permits violence to the chronological ordering. But what Dembski really wants, is an answer to the problem of death, and the origin of evil. He finds kairos can solve that too,
To understand how the Fall occurs chronologically and how God nonetheless allows natural evils [death] to rage before it, we need to take seriously that the drama of the Fall unfolds in a segregated area. Genesis 2:8 refers to this area as a garden planted by God (i.e., the Garden of Eden)....But in fact, [because critics argue Gen 2 is a kludged retelling of Gen 1] the second creation account, depicting the Garden, is just what's needed for kairos and chronos to converge in the Fall. It constitutes the "second creation" in the sense of chapter 14 of this book [final vs material causes]...In the Garden of Eden, Adam and Eve simultaneously inhabit two worlds. Two worlds intersect in the Garden. In the one world, the world God originally intended, the Garden is part of a larger world that is perfect and includes no natural evils. In the other world, the world that became corrupt through natural evils that God brought about in anticipation of the Fall, the Garden is a safe haven that in the conscious experience of Adam and Eve (i.e. phenomenologically) matches up exactly with their conscious experience in the perfect world, the one that God originally intended.
So finally we observe how Dembski's metaphysical furniture solves the problem. We define two kinds of causation, and from that construct two kinds of time, justifying an orthogonal, Kantian sort of separation. But we allow some interaction too, lest we end up with Barth's transcendence dilemma, which excluded science from text. With two notions of time to choose from, we use whichever definition is convenient to interpret the text (Gen 1), and when paradoxes arise, we use them both at the same time (Gen 2).

Does this kairological solution address the science of evolution? Dembski spends chapter 21 arguing that kairos is compatible with non-Darwinian evolution, e.g. theistically-directed change over time. He writes,
Does evolution therefore undermine the theodicy I am proposing? Not at all...On this view evolution is not so much a method of creation (though it can be that also) as a method of judgment by which God impresses on the world the radical consequences of human sin.
So practically speaking, Dembski's solution is indistinguishable from mainstream OEC. It discards literal chronology, it retains the empirical truth claims of evolution (minus the metaphysical materialism), and it allows some theological textual overlay to exist without negating the observational science below. In some respects it is more radical than OEC, because it allows the ordering of Genesis 1 to be non-chronological, and even more radical than YEC, because it allows theology to be a-causal. Note carefully the differences among YEC, OEC and Dembski: YEC demands that science be a-causal to solve a chronological problem with text; OEC demands that text be non-chronological to solve a causal problem for science; and Dembski demands that text be both a-causal and non-chronological, so as to leave science both causal and chronological.

That is, a surface reading of Dembski's solution looks as if text and science are made compatible by making them complementary--very much like Kant's solution. And like Kant's solution, it suffers from all the ills of a dualist metaphysics. Unlike Kant, however, the exact relationship between chronos and kairos, between material and final causes, between science and text is left unspecified, so only time will tell whether this model evolves along the Kantian path or takes a turn toward trinitarianism. In any case, it is unlikely that this theodicy will convince YEC advocates to abandon their hermeneutical method for one that is still so undefined.

In the next blog, I will offer a constructive critique of Dembski's view, suggesting alternative ways to achieve the same ends that will perhaps avoid some of the common pitfalls of a dualist metaphysic.

Benford's Law

Here is an observation which many people find unexpected. If you look at a telephone book, the first 3 digits are the area code which may either not vary or be limited to a few choices. The next three digits, at least in the good old days, were restricted to neighborhoods. When I was a kid, there was even a verbal mnemonic for the first 3 digits that attempted to help you remember your number.  But the last 4 digits were essentially arbitrary, especially the last digit. This is an example of putting the most important digit first and the least important last, where if you mess up the last digit, you will get a neighbor, whereas if you mess up the first digit, you may get Moscow or Timbuktu.

In the early days of computer engineering, there were multiple conventions for storing numbers in the bits of computer memory, some putting the least significant bit (LSB) first and others putting it last--the little-endian versus big-endian wars. But I digress. In non-computer use, we use the decimal system, so we are discussing the least significant digit (LSD).

The key observation is that the LSD of a measurement should look pretty random. Suppose you are a dissident in Iran, and the results of the recent election are posted, precinct by precinct. If you collect the LSDs from all the different precincts, they should look essentially random, with the digit "2" just as frequent as the digit "7", for example. Here's the result from the 2009 elections:
What is going on? Well, it turns out that if you ask people to generate a series of random numbers, as in the green bars labelled "Our random test", they pick "7" as "more random" than the other digits, whereas they avoid "5".

This is well known to investigators who wonder whether a scientific result has been forged. David Baltimore was a Nobel-prize-winning, world-respected biologist whose post-doc, Thereza Imanishi-Kari, claimed some amazing results. When her lab notebook data tapes were examined by the Secret Service forensics lab, they noted that the LSDs were not random, but showed the same characteristics as the Iranian election. Baltimore resigned as president of Rockefeller University over that flap, and all of us graduate students got a glossy brochure on "scientific ethics".
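As a rough sketch of the kind of last-digit screen such investigators run (the precinct totals below are made up for illustration, not real returns):

```python
from collections import Counter

def last_digit_counts(values):
    """Tally the least significant digit of each reported number."""
    return Counter(abs(v) % 10 for v in values)

def chi_square_uniform(counts):
    """Chi-square statistic against a uniform distribution over digits 0-9."""
    total = sum(counts.values())
    expected = total / 10
    return sum((counts.get(d, 0) - expected) ** 2 / expected
               for d in range(10))

# Hypothetical precinct totals -- a real screen would use the published returns.
precincts = [14802, 9317, 22045, 8733, 15126, 30491, 7268, 11059]
stat = chi_square_uniform(last_digit_counts(precincts))
# With 9 degrees of freedom, a statistic well above ~17 (the 95% cutoff)
# suggests the digits were not drawn uniformly.
```

On genuine measurement data the statistic hovers near its expected value of about 9; fabricated digits, with too many 7s and too few 5s, push it well above the cutoff.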

Why do people find it so hard to manufacture a truly random string of numbers? Perhaps because all their senses--hearing, seeing, tasting, touching, smelling--are wired logarithmically. Something that sounds twice as loud actually carries about ten times the acoustic power, so twice-twice as loud is about a hundred times the power. The same is true of brightness in vision, of the strength of odors, and so on. This is awfully convenient, because it gives each sense a far wider range of response. If we didn't see things logarithmically, we couldn't see both in the dark and at the beach; we couldn't hear both a pin drop and a rock concert; we couldn't smell the bouquet of a white wine. So since none of the human senses are scaled linearly, it is not too surprising that people fail so miserably when they try to invent linear randomness.
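The decibel scale is the standard way of expressing this logarithmic compression of loudness; a small sketch:

```python
import math

def db(power_ratio):
    """Express a power ratio in decibels: 10 * log10(ratio)."""
    return 10 * math.log10(power_ratio)

# Ten times the acoustic power is only +10 dB -- roughly one perceived
# doubling of loudness; a hundred times the power is just +20 dB.
print(db(10))    # 10.0
print(db(100))   # 20.0
```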

Both of these discussions have assumed that the LSD should be evenly distributed among 0's, 1's, 2's ...9's. But what about the leading digit, the Most Significant Digit (MSD)?

We can test this very easily by running digital data sets through a simple filter. Yup, the phone book MSD is evenly distributed. So is the lottery (as you might expect, since it is checked so often). But not a data set of the areas of lakes. Or, for that matter, the lengths of rivers, stock market indices, or the file sizes on your computer. Instead, "1", "2" and "3" occur just about as often as "4" through "9" combined. Frank Benford described this in 1938, and it is now known as Benford's Law.
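A minimal version of such a filter, tried here on the powers of two (a sequence long known to follow Benford's Law); the helper names are my own invention:

```python
import math

def leading_digit(x):
    """Most significant decimal digit of a positive number."""
    x = abs(float(x))
    return int(x / 10 ** math.floor(math.log10(x)))

def low_digit_share(values):
    """Fraction of values whose first digit is 1, 2 or 3."""
    firsts = [leading_digit(v) for v in values]
    return sum(1 for d in firsts if d <= 3) / len(firsts)

# Benford's Law predicts P(first digit = d) = log10(1 + 1/d),
# so digits 1-3 together should cover log10(4), about 60%.
powers_of_two = [2 ** k for k in range(1, 101)]  # spans ~30 decades
share = low_digit_share(powers_of_two)           # close to 0.60
```

Run the same filter over phone numbers or lottery draws and the share drops to the "random" 30%; run it over lake areas or file sizes and it climbs back toward 60%.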

How does this come about? Well, it has to do with the nature of the data. Let's take the population of cities as an example. The seventy-three largest cities by population are listed in Wikipedia, and we examine the distribution of 1-3 versus 4-9 MSD. A random distribution would predict 33% in the first class and 67% in the second. We found instead 42%. Well, perhaps this is just noisy data, since 73 is not such a large number and we'd expect sqrt(73) ≈ 8.5, or 8.5/73 ≈ 12% slop in the numbers. So let's use instead the populations of the 234 countries of the world. With a random noise of about 6%, we expect our 1-3/all statistic to be no greater than 39%, but instead we found 54%. The deviation increased when we enlarged our data set!

This is not looking good. Let's try one more time: we'll take the populations of all cities arranged by size, and cut off the list when the initial digit cycles around. Since the largest city is Tokyo at 32 million, we start with the 2nd largest city, Canton at 24 million, and cut off at the 3 million mark, thereby sampling each MSD equally. This gives us a total of 130 cities, an expected error of 9%, but a 1-3/all statistic of 51%. Still much too high! If we take instead the largest 500 cities, we get a statistic of 87%! What is going on?

An examination of our database reveals that cities are distributed unequally, with the best fit being a power law: City Population = 104.9 million * (rank ^ -0.74), with an amazing correlation coefficient of R^2 = 0.982. That means the population of cities is not evenly distributed between some lower and upper limit, nor is it a bell-shaped "Gaussian" curve around some "most likely size"; it is a curved "power law". If we plot our populations on log-log paper, this curve is straightened so as to fall on a straight line with slope -0.74. We have a "fat-tail" distribution, with most of the population in the small cities, not in the mega-cities.
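We can rerun the top-500 experiment on the fitted power law itself. A sketch, taking the 104.9 million prefactor and -0.74 exponent quoted above at face value:

```python
def msd(x):
    """Most significant digit of a positive number."""
    return int(f"{x:e}"[0])  # scientific notation, first character

# Synthetic city populations drawn from the power-law fit quoted above.
populations = [104.9e6 * rank ** -0.74 for rank in range(1, 501)]

low = sum(1 for p in populations if msd(p) in (1, 2, 3))
statistic = low / len(populations)
print(f"1-3/all statistic: {statistic:.0%}")
```

The synthetic list gives about 88%, within a point of the 87% found for the real top-500 list: the skew lives in the power law itself, not in any quirk of the census.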

The importance of this power law is that the "bins" are not evenly spaced. On log paper, the "1", "2" and "3" bins together occupy slightly more space than the "4" through "9" bins combined. (Mathematically, the 1-3 bins span log(4) ≈ 0.602 of each decade, leaving 0.398 for the rest.) So if the quantity we are looking at has a power-law distribution, then its MSD will be logarithmically spaced. Conversely, if the MSD is random, then it is evenly spaced on linear graph paper.

Smart AT&T engineers made the phone book, and they didn't want to run out of phone numbers by spacing them logarithmically, so of course phone numbers are distributed randomly into linear bins. But why are city populations different? We could try to explain it by making a city model using diffusion-limited growth, like salt crystals forming in a cookie sheet of sea water left in the sun, but the important assumption is that there are non-linear factors. An initial collection of houses attracts more people, so that the big get bigger, and the small not so much. The sociologist Robert K. Merton, studying the attribution of discoveries to famous scientists, called this the Matthew effect, after Jesus' words: "to those who have will more be given, but to those who have not, even what they have will be taken away."

Many physical phenomena involve growth, and growth involves the Matthew principle, and hence the logarithmic distribution.
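The growth connection can be sketched directly. Give every "city" the same starting size and let each grow by a random percentage each year (proportional, Matthew-style growth; the rates below are arbitrary illustrative choices), and Benford's distribution emerges on its own:

```python
import math
import random

random.seed(42)

# 2000 towns, all founded at size 100.
towns = [100.0] * 2000

# Each year, each town grows (or shrinks) by a random proportional factor.
for year in range(200):
    towns = [size * random.uniform(0.9, 1.6) for size in towns]

# Fraction with leading digit 1; Benford predicts log10(2) ~ 0.301.
leading_one = sum(1 for size in towns if f"{size:e}"[0] == "1") / len(towns)
print(round(leading_one, 3))
```

Because growth multiplies rather than adds, log(size) performs a random walk, its fractional part spreads out evenly, and the leading digits settle into Benford's logarithmic bins.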

All very neat and tidy, but what about physical constants? They don't grow. So why should Benford's Law be true of, say, spectroscopic lines, or energy levels?

Shao and Ma of Peking University looked at the statistical physics of bosons and fermions, where bosons are things that have zero or whole-integer spin, such as photons or most atoms, whereas fermions have half-integer spin, such as electrons and some ions. They discovered that Benford's Law applies to bosons exactly, whereas fermions fluctuate around a value (Gaussian statistics) as a function of temperature.

What does this mean? It means that the fundamental nature of reality--the statistics that apply to atoms and photons--is distributed logarithmically. Just like human senses.

My first thought was "well what do you expect if God's senses are also logarithmic?" After all, if Jesus was the God-man, and men are wired logarithmically, you'd expect at least a semi-log distribution.

But what does it mean to say that God is humanly wired? Perhaps it is we who have wired our physics backwards? The opposite of logarithmic is exponential, and perhaps physics has glommed onto exponential quantities as basic building blocks, making all the otherwise random statistics appear logarithmic. For example, why should space be linear, or time linear? We remember our childhood so much better than what happened yesterday, because we live our lives in a logarithmic fashion. If the natural progression of time is logarithmic, then distributions that depend on time should also be logarithmic. All our clocks and calendars are drawn improperly, because the boxes on our organizers should get smaller in the future, and larger in the past.

Perhaps Bishop Berkeley is right, that reality is what is observed. So the more people there are in the world, the more is observed, and the passage of time is speeding up. Theology is speeding up. Tipler's Omega Point is being approached exponentially, making the infinite finite.

Maybe it is us physicists that are the bosons.

The infinite life

I just read a most intriguing book review of Naming Infinity: A True Story of Religious Mysticism and Mathematical Creativity, in, of all places, The New Republic, entitled "The Infinite Life". It describes how the mathematical idea of infinity was poorly understood by the ancient Greeks (though a recently recovered palimpsest has demonstrated that Archimedes had a very good grasp of infinity even if Anaximander didn't), and how even Galileo could not understand how the countable integers (1,2,3...) were just as infinite as the even integers (2,4,6...). Georg Cantor attempted to plumb the depths of infinity and died in an asylum, unable to prove his "Continuum Hypothesis". This book describes how religious mysticism came to the aid of the Russian mathematicians who cracked the code of infinity.

So I resonate with this book on so many levels--math, infinity, religion, mysticism--but my favorite sentence from the book review is this one:
Just as naming God via glossolalian repetition was a religious act that brought the deity into existence, so naming sets via increasingly recursive definitions was a mathematical act that conferred a reality in the world of numbers. Cantor and before him the ancient Neoplatonists had shown the way, but this was only the beginning.
For Plato describes the shadows on the wall as 2-D projections of a larger 3-D world outside the cave. He also describes the method of knowledge gained by denying the false: apophatic theology. Both of these approaches, shadows and denials, are descriptions by projection, by limitations, by approximation. That is, the five blind Indians are feeling an elephant, one says "an elephant is like a wall" and another says "an elephant is like a rope" and so on. Each of these blind descriptions is a limited observation, a projection of the real elephant. So by adding up the experience of these 5 blind men, as well as the 5000 non-blind scientists that have studied elephants, we can "understand" what an elephant is. But the Platonic method is additive, it is linear. And a linear approach to infinity takes an infinite amount of time.

For each discovery--say, transcribing the genome of the elephant--adds another bit of information to the growing library of "elephant knowledge". Since the library grows toward a limit, it follows a saturating curve: with k = knowledge (scaled to its limit), k = 1 - exp(-t). Or to say it another way, dk/k --> 0, so log(k) flattens toward a constant.

A recursive method of knowledge, however, produces exponential information. Thus the approach to total knowledge is now linear, which is to say, an exponential approach to infinity takes a finite amount of time. That is, if we plot log(k), we will get a positive-slope line for our advancing knowledge. What would recursive information look like? Here's an attempt. Suppose we make a nanobot some 2 microns in length, which can communicate position, temperature, pressure, and chemical equilibrium every 100 ms through a terahertz transmitter shaped like a single crystal of hematite that responds to, say, an MRI magnetic gradient. Suppose now that this nanobot can duplicate itself every 20 minutes, and we inject it into the elephant. Soon it is dispersed throughout the bloodstream (a convenient source for the iron in the hematite), and our information about the elephant grows exponentially as the transmitters multiply. That is a recursive information source.
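The arithmetic behind "soon" is worth a glance. A sketch under the stated assumptions (one seed nanobot, doubling every 20 minutes, a target of a trillion transmitter nodes):

```python
# One nanobot doubling every 20 minutes: a recursive information source.
DOUBLING_MINUTES = 20
TARGET = 10 ** 12  # about a trillion transmitter nodes

nanobots = 1
minutes = 0
while nanobots < TARGET:
    nanobots *= 2
    minutes += DOUBLING_MINUTES

print(minutes, "minutes, or about", round(minutes / 60), "hours")
```

Forty doublings--a little over 13 hours--takes us from one nanobot to a trillion, which is the whole point: the exponential source reaches in an afternoon what a fixed-rate source would need a trillion ticks to match.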

"Oh, but that is data!" one might object, "that isn't information." Okay, let's put some rudimentary memory in those transmitters, and maybe some logic processing. The hematite crystals are divided into domains, and a surrounding membrane keeps a positive proton gradient around each one, so as to allow simple logic gates to work. Now all those nanobots can be instructed to move in one direction or another, to collaborate or move independently, to replicate or stop replicating. In other words, the nanobots are a distributed information-processing system with around a trillion or so nodes, capable of quite sophisticated behavior, which can be directed through the MRI gradients in my monstrous supercooled magnet. Is that information now?

The power of recursion. Driving some men insane, and others religious. Dividing the world into two pieces.

Recursion.

Review of The End of Christianity, Part 2

To recap part 1, there are two competing views of how to treat the truth claims of Genesis 1 and science: text trumps taste/touch/sight; and text complements (as in, irrelevant to) taste/touch/sight. In philosophy jargon, these are two competing epistemologies: 1) YEC believe in trumping, where hearing is privileged over other senses; whereas, 2) OEC believe in complementing, where hearing is (mostly) irrelevant to the other senses.

I'm stereotyping, of course, because it isn't just hearing the surf crash on a shingled beach; it is hearing the revealed words of God, it is language, it is texts that are privileged over the other senses. In response to Lockean or Humean critics: ears suffer just as much distortion as the other senses, so there is no special faculty or brain wiring that makes hearing more reliable than sight; rather, it is the communication channel of language that is privileged over taste/touch/sight. In computerese, we say that communication carries its own error-correction algorithm, so that language is privileged over the raw senses because it is a potentially error-free information channel. Inasmuch as sight is used to read texts, it is privileged as well; it is just that we learn to read after we learn to talk, so "hearing" is shorthand (metonymy or synecdoche in literary jargon) for this whole higher level of communication.
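The simplest error-correction algorithm makes the point concrete: repeat every bit three times and take a majority vote at the receiving end. (This is an illustration of the principle, not a claim about how language actually encodes itself.)

```python
def encode(bits):
    """Repetition code: send every bit three times."""
    return [b for b in bits for _ in range(3)]

def decode(bits):
    """Majority-vote each group of three received bits."""
    return [int(sum(bits[i:i + 3]) >= 2) for i in range(0, len(bits), 3)]

message = [1, 0, 1, 1, 0]
received = encode(message)
received[4] ^= 1  # the channel corrupts one transmitted bit

print(decode(received) == message)  # True: the flipped bit is voted down
```

As long as no more than one bit per group of three is garbled, the message arrives error-free--which is what makes a channel with built-in redundancy more trustworthy than a single raw glimpse.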

But if we are going to treat "hearing" as metonymy for language, shouldn't we in fairness do the same for "taste/touch/sight"? It is not that vision is error-free, but once that vision is converted into a model of reality, do we not have the same "error correction"? Suppose that I am driving a car, and think I see a child running across the road in front of me. I could doubt my eyes, since this is a rare occurrence, or look again, or turn on the high beams, or apply any number of error-correction methods until I was certain of what I saw. Nearly simultaneously, however, I am constructing a mental model of whether my trajectory and the child's trajectory will intersect, whether braking will change that intersection, or whether a sharp tug on the wheel would be effective. From a physicist's viewpoint, I am calculating velocity vectors, acceleration vectors, the coefficient of static friction, vehicle inertia, and the likelihood of impulse-related injuries to soft bodies. We all do Newtonian physics; it is just that some of us do it calmly with a pencil.
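For the calm-pencil crowd, the core of that mental model is one line: setting kinetic energy equal to friction work gives a braking distance of d = v^2 / (2 * mu * g). A sketch with made-up but plausible numbers:

```python
# Braking distance from kinetic energy = friction work: d = v^2 / (2*mu*g).
# The speed and friction coefficient are illustrative assumptions.
v = 50 / 3.6   # 50 km/h converted to m/s
mu = 0.7       # rubber on dry asphalt, roughly
g = 9.81       # m/s^2

d = v ** 2 / (2 * mu * g)
print(round(d, 1), "meters to stop")  # about 14 m
```

If the child is 10 meters away, no amount of error-correction on the visual channel changes the verdict of the model: braking alone will not do.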

So just as communication is a complexity level above hearing, is not model prediction a complexity level above sight? Just as communication has its built-in error-correction code, does not model prediction have its own error-correction code? To make this more pertinent to our discussion, let's categorize all this "mental modelling" under the word "science". Then should textual communication be privileged over experiential science if they are both higher-level brain functions?

Translating this into theology, should we be like YEC and privilege text over science, or should we be like OEC and treat them equivalently? Let's go back to the driving example. Suppose that you see the child in the road and calculate that the trajectories do not intersect, but to be safe you take your foot off the gas. Simultaneously your wife, riding in the passenger seat, shouts "Stop!" Does the text take precedence over the science? What if instead of your wife, it was the police officer administering driving exams? What if it were your teenager that just got her license? What if instead of a car you were driving an 18-wheeler full of eggs? What if it were a bicycle with no back brakes?

If you are like me, these scenarios display a complex interaction between hearing and seeing, between text and model, between authority and reality, between word and deed. The more authoritative the speaker, the more inclined I am to privilege communication; but on the other hand, the more extreme the science, the more likely I am to process the text through a "reality" filter of model prediction. For example, if I am driving an 18-wheeler full of eggs, and I know intuitively that I won't be able to stop in time to avoid hitting the child, I may interpret "Stop!" as a directive to steer the vehicle away from the impending collision.

This "reality filter" is what philosophers call "metaphysics", and it reflects the implicit mental model that enables me to make predictions. In the above example it is my belief in something called "inertia" that tells me I can't stop an 18-wheeler on a dime. Notice how "inertia" is not something I can taste, touch or see, but it is nevertheless part of my mental furniture. In like manner we have an entire mental mansion full of metaphysical constructs that enable us to make predictions and otherwise behave as humans instead of apes.

Therefore Dembski's solution to the YEC/OEC epistemological conflict is to construct some newish metaphysical furniture. (The jacket blurb claims that the book's thesis is a new theodicy, but if you read the foreword, you will know this is just philosophical cover for a metaphysical OEC defense.) But before we try out the new love seat in the foyer, we should test the comfort of the traditional furniture we are replacing. Like Goldilocks, we should probably save baby bear's bed for last. Accordingly, let us look at two traditional positions on the YEC/OEC conflict.

OEC in The Presbyterian Church in America (PCA)

In the May/June 2010 issue of Modern Reformation, eight geologists who are members of the PCA denomination defend an OEC interpretation of Genesis, presumably because their education and careers depend on it, and because the PCA has been entertaining motions to declare itself YEC. In their defence, they point out that it is not heretical to be OEC:
How old is the earth? Does an honest reading of the opening chapters of Genesis confine creation to six days a few thousand years ago, or does it allow for an origin of much greater antiquity? These questions are hardly new. Scientific assertions suggesting an alternate interpretation of the length of creation began more than 200 years ago, well before the days of Charles Darwin. With a debate more than two centuries in the making, one might reasonably expect that Reformed scholars long ago resolved the issue. In fact, the much-sought resolution has proven elusive. In 1998, the Presbyterian Church of America (PCA) commissioned a Creation Study Committee (CSC), made up of both Bible scholars and natural scientists, to consider the relevant Scriptures in light of the various existing interpretations and scientific evidence. The report, submitted after two years of investigation, did not recommend a definitive answer, but did at least conclude that it is possible to believe both in an ancient earth and the inerrancy of Scripture.
They then quote this report's conclusions, which makes allusion to the Reformation controversy over geocentric vs heliocentric astronomy.
Ultimately, the heliocentric view won out over the geocentric view because of a vast preponderance of facts favoring it based on increasingly sophisticated observations through ever improving telescopes used by thousands of astronomers over hundreds of years. Likewise, in the present controversy, a large number of observations over a long period of time will likely be the telling factor.
In other words, this 2000 document suggests that the weight of evidence, or inductive logic will eventually resolve the YEC/OEC debate. These eight geologists beg to differ, suggesting that if weight alone would tip the scales, there would be no present debate. They also imply that the document's analysis of the Reformation debate was similarly flawed, but the meat of their argument lies in their epistemological defence of OEC's superior inductive support.

Summarizing their arguments, we have:
1) OE holds widespread scientific consensus--e.g., more noses;
2) OE is pragmatically more effective at making money--utilitarian;
3) OEC vs YEC is mischaracterized as atheistic materialism vs theistic supernaturalism--a false dichotomy;
4) The ratio of YEC to OEC in the PCA is portrayed as 50/50, when in fact among practicing PCA geologists it is very close to 0/100;
5) YEC is presented as an alternate science, when in fact it is an alternate scriptural hermeneutic;
6) Inductive evidence of a >6000-year age supports OE, with the following selected examples:
i) Varves are light/dark sediments deposited yearly on lake beds which, like tree-rings, can be counted up into the 100,000's--clearly more than a ~10,000-year-old Earth allows.
ii) Plate tectonics (spreading lava at the mid-ocean ridges) agrees with radioactive dating, which agrees with GPS velocity measurements, which agree with present geography: the continents have been separating for millions of years.
iii) All these geochronometers--radiocarbon, radioisotope, varves, tree-rings, plate tectonics, GPS--form a consistent cross-calibrated data set supporting a >10,000 year old earth, which implies that finding any particular error does not invalidate the whole.
7) Any conflict of chronology with scripture would constitute a disagreement between general and special revelation, which is inconsistent with theological notions of the nature of God. E.g., a God who made the Earth appear to be older than it "really" is, would not be the God of the Bible.
8) A Kantian split between YEC theology and OEC science is not a PCA option because it damages vocatio (our work in the world) and evangelium (our witness to the world).

YEC in the PCA

In response to this hoary argumentation (which I first heard some 30 years ago from my college geology roommate), John Reed, a YEC geologist and PCA elder (and himself a counterexample to the statistics above), responded with this web post intending to "demonstrate to laymen that their arguments are neither based on proper authority nor are scientifically compelling." He begins,
The only infallible source of truth is God’s word, and the derivative system of theology for the PCA is found in the Westminster standards. Thus, we must test the argument of these authors against both. The Westminster Confession of Faith (WCF) summarizes numerous Scriptures..."in the space of six days, and all very good."
Reed then goes on to argue that there is no evidence of any OEC views among the writers of the WCF, and indeed that there is a sixteen-century consensus of Church theology for the YEC position. Reed complains that these PCA geologists never bothered to interact with the Bible or theology, since "The authors do not argue Scriptural or confessional support for their position, apart from a generic reference to the value of general revelation."

After making some sweeping conclusions about the unorthodoxy of OEC (contradicting, and in fact denying the validity and orthodoxy of, the earlier PCA report), Reed then launches into a point-by-point rebuttal, which we selectively edit down to those points which are constructive responses to the items above. But it should be clear from this introduction that Reed's epistemology alternates between a desire to claim universal support and a desire to advocate individual freedom, between a textual absolutist and an inductive skeptic. As we say in debate class, it is a shotgun defence, to be praised not for its brilliance but for its tenacity, not for its coherence but for its completeness. If we permit OEC to argue from consensus or a preponderance of examples, then we cannot criticize a YEC defence consisting of great quantities of qualms. Here are a few of his points:

1' Other scientific consensus viewpoints have fallen, so why is this one special when it clearly contradicts scripture? (e.g. only revealed texts can give absolute truth, science is only relative.)
2' Pragmatically YE is indistinguishable from OE, since description is independent of history.
3' Lots of anecdotes prove it is a true dichotomy, not a false one. One cannot be OEC without also becoming atheist, no matter what they say.
4'  Numbers are irrelevant to truth. e.g. PCA should not decide this issue by taking a vote.
5' It is insulting to claim hermeneutics poisons science, YEC scientists can still do fine work.
6' Reality and induction don't have to coincide, only texts can convey reality.
i') Varves have been misinterpreted in the past, therefore this example is not conclusive. (Many other anecdotes of mistakes being made, which presumably invalidates all these scientific endeavors.) Furthermore, one can imagine varve-like scenarios undermining current consensus which induction cannot exclude. e.g. induction cannot reliably tell us details of the past. In fact, since all induction depends on uniformity of physical laws, an unprovable assumption, no induction can achieve certainty.
ii') Plate tectonics is likewise marred with error, and therefore cannot be proven certain.
iii') Consistency between geochronometers is a psychological phenomenon, not a real one. Errors are errors, and averaging them or taking new data doesn't make them go away. Only revealed texts are error-free.
7' The chronology isn't certain enough to conflict with certain scripture. Apparent age is not evil, for example, wine at Cana. Chronological conflict might be all Satan's fault. And God can do whatever he likes regardless of human theological reservations, which are merely faulty interpretations anyway.
8' To say YEC destroys vocatio insults YEC scientists, and to say YEC undermines evangelium denies the offence of the gospel. It is supposed to be offensive.

There you have the debate, first in the original documents and now in these convenient bullets. What have we learned about the metaphysical furniture? First, OEC geologists really are not very concerned with texts at all, and are willing to oppose sixteen centuries of textual consensus with a few scattered examples, indicating a rather inductive view of hermeneutics; whereas YEC proponents are not really very concerned with science at all, and are willing to oppose thousands of peer-reviewed articles with a few scattered counterexamples which never change despite continuous scientific progress, indicating a rather deductive view of science. Second, OEC geologists seem to place serious theological significance on truths revealed by inductive science while trivializing textual discrepancies, whereas YEC proponents seem to place serious theological significance on hermeneutical truths while trivializing scientific discrepancies. Third, OEC geologists do not seem to moderate their theories with qualifying words like "possibly," "may," or "suggests," despite their adherence to a "scientific method"; whereas YEC proponents do not moderate their exegesis with such qualifiers, despite their adherence to original sin and a degenerate intellect. Fourth, while advocating completely opposite epistemologies, neither side engages the metaphysics of the other.

We could go on with these observations, but I think I've made my most important point: we have a duality of opposing views of truth, which in turn are based on opposing views of reality. We have epistemological dualism reinforcing a metaphysical dualism. In whimsical terms, we have a choice between daddy bear's rigid, straight-backed chair served with strawberries and cream, or mommy bear's overstuffed comfortable chair served with broccoli and spinach. The rigidity of truth in the YEC perspective is tempered by the perspicuous ease with which texts can be consumed, while the comfort of truth in the OEC perspective is tempered by the difficulty of digesting it.

It is my contention that dualities always degenerate into thesis and antithesis. If you xerox a document enough times, it becomes black and white. The only stable positions of a seesaw are at the extremes of its motion. A man cannot serve two masters. When truth loops back on itself, as when an epistemological argument is presented, it cannot maintain a balance but the feedback drives it to its extremes.
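The xerox line can be made literal. A photocopier's contrast curve pushes mid-grays toward black or white, and iterating any such S-shaped curve drives every starting shade to one extreme; the curve below is one arbitrary choice for illustration:

```python
def contrast(x):
    """An S-shaped contrast curve on gray levels in [0, 1]:
    shades above 0.5 get pushed up, shades below get pushed down."""
    return x ** 2 / (x ** 2 + (1 - x) ** 2)

def photocopy(gray, copies):
    """Run a gray level through the contrast curve repeatedly."""
    for _ in range(copies):
        gray = contrast(gray)
    return gray

print(round(photocopy(0.6, 20), 3))  # light gray becomes white (1.0)
print(round(photocopy(0.4, 20), 3))  # dark gray becomes black (0.0)
print(photocopy(0.5, 20))            # 0.5: the one unstable balance point
```

Only the exact midpoint survives, and the slightest perturbation sends it to an extreme--which is the feedback argument in miniature.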

Nor is Hegel correct in postulating or searching for a synthesis that converts the two back into one, any more than the Greeks were able to find a solution to the problem of the one and the many. For there is no information in the One, any more than there can be Hindu science or Communist elections. Truth has no significance if there be not falsehood also. Yet as soon as we elaborate a way to distinguish truth from falsehood--an epistemology--we have completed the feedback loop that will generate extremes.

Therefore this debate was inevitable as soon as Augustine identified two kinds of revelation: Special and General. For the number two is a scary number, a number that divides, that contradicts, that cannot be tamed. If there be any resolution to this ancient debate, we must go back to the Trinitarian controversies of the first 5 centuries, when the accursed number was banished from theology. But before we make the plunge into the iota of difference, let us first consider Dembski's new furniture.

And not surprisingly, given his triple degrees in Math, Philosophy and Theology, he gets right to the point.
To be continued...

Review of The End of Christianity, Part 1

In the past few years, there has been a changing attitude toward Genesis 1 among evangelicals, with many preferring to take it as a 6-day event around 6000 BC, aka Young-Earth Creationism (YEC). The principal argument that has converted many devout Christians is that YEC is the interpretation of Gen 1 common to both the Church Fathers and the Reformers. Whether Catholic, Orthodox or Reformed, it is safe to say that the founders were YEC, whereas Old-Earth Creationism (OEC) didn't arise until after the Enlightenment, as a product of increasingly apostate geologists. If we evaluate a theory principally by the people who have advocated it, YEC is the hands-down, no-brainer, ad hominem winner.

In his 2009 book, The End of Christianity, self-professed orthodox theologian and philosopher, William Dembski, challenges this YEC view with a spirited defence of OEC. While agreeing with Dembski's conclusions, I do not always agree with his methodology, and this difference caused me to start this blog on the dangers of dualities, ideologies, and the filioque clause.

The Debate

Let's begin with a stereotyping of the two sides. Despite the jacket blurb on Dembski's book, he is not trying to set up atheists against theists, or even non-Christian theists versus Christians, but rather creedal Christian YEC versus creedal Christian OEC. This is an internal debate among Christians who take the Word of God seriously, who consider themselves heirs of the revealed word and defenders of the ancient faith, yet differ on this important aspect of Christian theology.

But before we get into particulars, let me first acknowledge that this was not a hot-button issue of Christian theology for the first 19 centuries. There just weren't that many theological dogmas resting on the YEC interpretation of Genesis 1. Now of course, the dogma of Gen 1:1 that "In the beginning, God..." was fundamental in the millennial fight between the Christian worldview and pagan creation myths, but that dogma didn't even extend to the next verse. Among Christians or between denominations, no one in the first 1900 years of the Church put more than passing importance on the duration and temporal location of creation. (At my own seminary, the importance of six days was restricted to whether one should keep sabbath observances or not.)

Why? Simply because 6000 years before the present was indistinguishable from "forever". Six thousand BC was a full 2000 years before historical records were kept, and therefore faded into the mists of pre-history. The other pre-histories that competed with the Bible were pagan mythologies, and were to be rejected on many other grounds. So the lack of development of the theology and hermeneutics of Genesis 1:2ff was not due to some sort of dogmatic consensus, so much as a lack of dogma altogether. It wasn't challenged, and therefore it wasn't presented as an essential dogma.

All that changed with the 20th century, and the development of modern geology as support for Darwinian evolution and atheistic materialism. For the first time, we had a pre-history story that contradicted the Bible and was not a pagan myth. Instead, it was a quasi-religious belief in methodological naturalism with a competing view of truth. Truth was no longer to be based on authorial revelation or tradition, but had to be substantiated from current experiment and (non-supernaturalist) theory. The struggle for the soul of modern man became the underlying theme of the Materialist 20th century, spawning three world wars and several cold wars pitting the forces of atheism against the forces of faith. It is in response to this sudden and violent attack in the 20th century that Christians fled to the fortress of Tradition. Mind you, many of these same Christians had disparaged Tradition as the medieval whore of the Catholic Church, but suddenly welcomed it as the 20th century defense against Modernism.

In this attempt to defend the faith against modern paganism while differentiating it from medieval mistakes, we have a reworking of the entire Reformation agenda. It is a fascinating story, still unfolding, and it reveals that we either have a second Reformation, or we have undone the first. How can we tell? Well, if we have a 2nd Reformation, then at the end of the day there will be a minimum of 3 groups: two from the first Reformation, and at least one more from the second. If, on the other hand, we have undone the Reformation, then we will see a regrouping of RCC/Protestants, splitting into two parties but independent of how they voted in the first Reformation. My own reading of the politics is closer to the second position than the first: we are witnessing an undoing of the Reformation, a realignment along philosophical rather than historical lines.

But I digress. The YEC/OEC debate is about metaphysics, the nature of reality, or perhaps about epistemology, the method of obtaining truth. Just as the Reformers went back to the Greek and Hebrew as a more reliable tool for understanding the truth of the Scriptures, so we are seeing a polarization between the two sides today as to the proper way to interpret Scripture--though now both sides argue from the original Greek and Hebrew, of course.

The irony is that the Roman Catholic Church (RCC) warned the Reformers that going back to Greek and Hebrew would not solve their metaphysical/epistemological problem, that the Bible cannot be properly understood in any language without the context of tradition. Unsurprisingly in today's YEC/OEC conflict, the proper traditional context for the interpretation of Genesis 1 is precisely the topic of debate. Perhaps even more ironically, the RCC has itself changed its view on tradition in the meantime, demonstrating that neither the Reformation nor the Counter-Reformation was able to permanently solve the problem of hermeneutics, the proper way to interpret the Bible.

Because the reasoning behind this debate cuts to the heart of the Reformation, to the heart of Protestant theology, it is worthwhile seeing how orthodox thinkers handle the challenge. But it is of even greater importance, I argue, because it reveals an even older filioque split between East and West, between a Greek Trinitarianism and a Latin Unduotritheism (yes, I made it up) lying at the heart of Western theology. And it is this older split that may provide the way forward. But first, the current debate.

The Majority Opinion

The Enlightenment was a watershed in Western culture, and if you escaped high school history class without singing its praises, then you are obviously home-schooled. Many things happened all at once, and teasing apart the threads of causality from the tangled ball of history is well-nigh impossible. Materialists see it as the beginning of a non-religious worldview that culminated in the atheistic empires of the 20th century. Theologians see it as the decay of the medieval synthesis and the dethronement of the Queen of the Sciences. Scientists see it as the dawning of modern scientific theory, the triumph of Newtonian materialism, and the enlightening of the religious dark ages. In short, some profited, some lost, and many blithely continued on their way, but none remained unchanged.

The century before the Enlightenment was generally called the Reformation, and it was in that century many of the philosophical changes that would empower the Enlightenment were established: popular literacy, the restoration of historical lore/texts, the rise of the middle class, the breakdown of the medieval synthesis, the discovery of new lands/ideas/worlds, the rise of the rule of law, the fall of kings, the defeat of long-entrenched empires. In short, an upheaval in the heavens and the earth.

And during this upheaval, the Church was suffering a major attack on its theological worldview. One might divide the theological history of the Church into major epochs, with persecution forming the backdrop of the first three centuries, followed by a rebuilding (The City of God) in the next eight, culminating in the construction of a coherent, consistent theology in the 13th century that lasted three more centuries. During the Reformation, this medieval synthesis came under increasing attack both from inside and out, leading to a contentious split between Progressives and Traditionalists, otherwise known as Reformation and Counter-Reformation. Sociologically, there had been a reformation every century or so throughout the middle ages, usually marked with the establishment of a new monastic order. But like a perfect storm, everything conspired to make the 16th century reformation something far bigger than the centenary spring cleaning.

It wasn't clear who was winning the Progressive vs Traditionalist battle for another century or so, but by the 18th century, with the US and French Revolutions, it was pretty clear that the Progressives had won. Over the next two centuries, those countries that were most Traditionalist--Italy, Spain, Ireland--lost more and more ground to those that were most Progressive--England, Germany, France. Mind you, the Spanish were the Evil Empire in the 16th century, but today they are fighting to stay solvent.

Philosophically, this engineering mindset of the Enlightenment meant that Reason was elevated over all other faculties of Man. Everything came down to rational explanation, and if none could be found, say, for the presence of the human appendix, why then this lack (on God's part) was reason enough to reject Christianity! Reason alone was supreme, and woe betide anyone who relied on tradition, authority, or emotion, much less on the tarnished relics of a benighted past!

Theologians adapted to this onslaught, with Luther arguing for an invisible church, Puritans arguing for a personal faith, and Schleiermacher arguing for a subjective religion that didn't need no stinkin' reasons. The move mirrored Kant's separation of the world into noumenal and phenomenal realms--what Stephen Jay Gould called Non-Overlapping Magisteria, or what Nancy Pearcey refers to as the Fact-Value divide. By turning inward, theologians attempted to deflect the attack of reason on the foundations of faith. Just as there is no accounting for taste, there need be no accounting for morals, for righteousness, or for worship. By fleeing to the land of immaterial irrationality, theologians hoped to avoid the artillery barrage of reason.

But does the land of irrationality have landmarks? Then those sneaky rationalists can set their artillery sights and flatten them. Is the Bible infallible? Then we will show how science contradicts it. Is aberrant sex immoral? Then science will show how morals evolve. Is truth absolute? Then science will show how truth is a sociological construct. So it happened that the displaced theologians were left in a featureless land, "as on a darkling plain Swept with confused alarms of struggle and flight, Where ignorant armies clash by night."

The Minority Report

What was the alternative? More tradition? My RCC friends are attending Tridentine mass in Latin. My Protestant friends are insisting on clinging to the Westminster Standards. (Here's a recent blog from World Magazine on this fight.) Both of these traditions hail from the Reformation, which causes my even more radical friends to rush into the Antiochian Orthodox church. Alas, the history of the Antiochian church is itself checkered with Byzantines and Crusaders vying for power. There just isn't any safe haven for traditionalists despite claims of "unbroken succession", which vanish under inspection like relics of the true cross.

"But wait!" the traditionalists complain, "you are applying that universal acid 'reason' to the very thing that replaces it, 'tradition'! It isn't necessary that modern historians grant their seal of approval to our tradition, only that we have properly implemented the tradition handed down by our fathers."

Umm, so where in the newly reconstituted Antiochian church is this "tradition" to speak English? In what Protestant church is the Nicene creed recited with or without the filioque clause? Or where in the papal bulls do we see a canonization of the Tridentine mass? Or for that matter, where in the US do congregations answer their priest "and with your spirit" instead of the poorly translated  "and also with you" as the Pope requested?

The point is that "tradition" isn't even very traditional. It has never been more than an outer ditch to slow down the progressive onslaught while proper defensive measures can be taken. Nor was it intended to be, nor can it be, fortified into a permanent defense, because the very act of constructing modern support destroys its existence as a tradition. As Edmund Burke explains so well, conservatives aren't against change, they are just against irreversible change. Conservatives value the past, and would make small incremental steps to adjust to the future. Tradition is not a defensive position, but an offensive weapon. We conquer the future with the experience we possess, and tradition is the weaponry inherited from our past. The progressive attack on tradition is then merely the diversionary tactic of disarming us, which precedes our capture and execution. That is why defending our weapons is a losing strategy, since the armory was never meant to be a defensive structure.

Well, what are the weapons stored in this armory of tradition? They are of two types: word and authority. There are the revealed texts such as Moses' two tablets of stone, preserved in the authoritative ark of the covenant, an unremarkable box which derives its authority from its proper preservation of the same. Of course, we no longer have the box and stones, but we have authoritative copies. And as this authoritative text and its seals of authenticity are passed from generation to generation, the guarantees, the seals, the authority are preserved in the form of additional texts, resulting in a nested ark of texts, each surrounding and protecting the text inside. So in one sense we have only one type of weapon in the armory--text--albeit with much accrued filigree and decoration we call authority.

Consider the eucharistic hymn Adoro te devote of St Thomas Aquinas:
ADORO te devote, latens Deitas, quae sub his figuris vere latitas: tibi se cor meum totum subiicit, quia te contemplans totum deficit.
Visus, tactus, gustus in te fallitur, sed auditu solo tuto creditur; credo quidquid dixit Dei Filius: nil hoc verbo Veritatis verius.
O memoriale mortis Domini! panis vivus, vitam praestans homini! praesta meae menti de te vivere et te illi semper dulce sapere.
Iesu, quem velatum nunc aspicio, oro fiat illud quod tam sitio; ut te revelata cernens facie, visu sim beatus tuae gloriae.
And here is the version that appears in the 1982 Episcopal Hymnal,
Humbly I adore thee, Verity unseen, who thy glory hidest ‘neath these shadows mean; lo, to thee surrendered, my whole heart is bowed, tranced as it beholds thee, shrined within the cloud.
Taste and touch and vision to discern thee fail; faith, that comes by hearing, pierces through the veil. I believe whate’er the Son of God hath told; what the Truth hath spoken, that for truth I hold.
O memorial wondrous of the Lord’s own death; living Bread that givest all thy creatures breath, grant my spirit ever by thy life may live, to my taste thy sweetness never failing give.
Jesus, whom now hidden, I by faith behold, what my soul doth long for, that thy word foretold; face to face thy splendor, I at last shall see, in the glorious vision, blessed Lord, of thee.
What does St Thomas (the authority) say about texts? Taste, touch, vision are insufficient for truth, only faith that comes through hearing. And what the Truth says, that for truth I hold. Jesus (the Truth) now hidden, I by faith (textual) behold, what I long for (purpose, end), the word (textual) foretold.

We come full circle back to the problem of texts. If taste, touch and vision are not as trustworthy as revealed texts, does this mean they are irrelevant to the meaning of the text, or that they are subservient to the meaning? Does the text live in our minds as a disembodied truth independent of sense experience, or does the text define how our sense experiences are to be understood?

The significance of this question is that Descartes and Locke and Hume guided the Enlightenment on a path of basing all Reason on sense experience. So if we replace Aquinas' words "taste, touch and vision" with "reason", we have captured the dilemma. Is Reason irrelevant to revealed Word, or is it subservient to it?

The Progressive says irrelevant. The Traditionalist says subservient. The OEC say geology is irrelevant to the truth of Genesis 1. The YEC say geology is subservient to the truth of Genesis 1. Which is right? Is modern geology with its 4.5-billion-year-old earth in agreement with a 6-day creation, or must we subjugate "taste, touch and vision" to the authority of the revealed text?

Dembski says "yes".
To be continued...

When the National Academy of Sciences became religious

Frank Tipler, a widely published physicist who also finds himself on the Anthropogenic Global Warming (AGW) deniers list, has been recording the decline of the National Academy of Sciences. (We Darwin-deniers knew that about the NAS long before AGW.) Their latest publication flogs the now-dead horse of AGW. (What, you didn't think it was dead? Unfortunately, neither did Kevin Rudd. But Harry Reid does.)

Tipler's latest missive is on the difference between True Science and Cargo-Cult Science. As described by Richard Feynman in a 1974 lecture, the cargo cult was a religion of Pacific islanders who watched WWII cargo planes disgorge goods for five years and then abruptly vanish. So they carved airstrips out of the jungle, made wooden radar dishes and wooden radios, and waited for the good times to resume. And a National Academy that values the benefits rather than the pursuit of science will likewise be sitting around wooden antennae.

My question is what made the NAS go bad? Surely the vanishing returns of science would let them know they were on a wrong track. Where did their humility go? Here's my long-winded answer from a recent e-mail exchange.

There's a paradox inherent in these organizations (NAS, AGU, AAAS, ACS, APS etc), similar to the hedonistic paradox. That is, if we seek happiness, we never find it, but if we seek some greater thing, we find happiness. Chesterton talks about it as the necessity of risking one's life to save it. Jesus of course, said he who loses his life for my sake will find it. The paradox arises whenever we are attempting something that is not under our conscious control, so when we try to control it, we lose it.

When the NAS, a self-propagating invitation-only organization, purports to be the leading scientific organization in the US, it is attempting to control something it can't, simply because the "gift of science" is not under government or conscious control. A non-member may be far more gifted than a member. In former times, the NAS was humble enough to invite gifted individuals to join, but in these more political times, "leading" means "following the politics". By only inviting like-minded people, it has rapidly degenerated into a group-think club. Thus by defining itself sociologically, it has destroyed its sociological importance. Which, of course, is what sociology does to everything it touches. This is the nature of recursive science, which is why in former "unenlightened" times, it was the privileged domain of religion.

The only way to stabilize a "self-propagating, invitation-only" recursive club, is to have inviolable external criteria. This is why democracy without absolutes is impossible, because one can always hold an election to abolish democracy. This is why the Declaration says "We hold these truths to be self-evident that all men are created equal, that they are endowed by their Creator with certain unalienable Rights..."

It's all in the math.
I don't see that sociology necessarily has to perform this destructive function. And I am not sure that I know for sure what recursive means and why it is "the privileged domain of religion"?
I've been struggling with the meaning of the word "holy", and concluded that it refers to something either: a) in close proximity to something holier; or b) inherently holy.

Applying this definition a few times, we should get a residue of things that are holy in-and-of themselves. After some headscratching, I concluded that these holy things are God, people and language. I've blogged a bit on this, here and here. The unique property about these three things is that their definitions refer back to themselves, rather than to something else. "I am that I am", "Man is the measure of all things", dictionaries define words with words. This "referring back to oneself" is what I call recursion. And ontological recursion is the worst sort.

Recursive things are not like other stuff. Other stuff accumulates (linearly), but recursive stuff explodes (exponentially). Other stuff fades with time, but recursive stuff goes on forever. Other stuff can be taken in moderation, but recursive stuff is all or nothing. Other stuff stabilizes, recursive stuff destabilizes. Other stuff maintains, recursive stuff propagates. And we could go on with the analogies.
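The contrast can be put in a few lines of toy code (a sketch of my own, not from any source; the particular numbers are made up and only the growth rates matter):

```python
# Toy illustration: "other stuff" accumulates, recursive stuff explodes.

def accumulate(steps, increment=1.0):
    """Non-recursive growth: each step adds a fixed amount from outside."""
    x = 1.0
    for _ in range(steps):
        x = x + increment      # linear in the number of steps
    return x

def feed_back(steps, rate=2.0):
    """Recursive growth: each step feeds the result back into itself."""
    x = 1.0
    for _ in range(steps):
        x = x * rate           # exponential in the number of steps
    return x

print(accumulate(10))   # 11.0   -- grew by 10
print(feed_back(10))    # 1024.0 -- doubled 10 times
```

Ten steps of accumulation add ten; ten steps of feedback multiply by a thousand. That is the whole point of the list above.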

Thus recursion is sociological poison. It is like smallpox, only harder to eradicate. Historically, societies have found ways to quarantine and neuter such dangerous stuff. We begin by giving it a name, and then a lot of protocols. We specify a special category of people trained to handle it. Occasionally it gets loose, and we have elaborate procedures for reconfining this thing.

And that is the fundamental nature of religion.

So when, say, the Fore tribe of Papua New Guinea ate the brains of the recently departed, they were infected with "kuru", a degenerative brain disease which turned out to be the first known prion disease, discovered long before "mad cow disease" spread through cannibalistic cow feed. In their culture, it was part of a religion, and the Australians demanded the practice stop. But there are all sorts of questions raised by prions, not least what the prion protein is doing in the brain at all. It would seem that recursion can be deadly, and that prions are "kill switches" to prevent recursion.

Sex is another obvious recursion, with the ability to totally dominate the population. After all, if alpha males can impregnate 100's if not 1000's of females, why is the population 50/50 and still diverse? The usual explanation (which is really begging the question in the manner of evolutionary biology) involves death and disease. Once again sex becomes a key feature of all religions, with all kinds of safeguards to avoid recursion. (I blogged on this here.)

Language doesn't seem so deadly, until you consider that this is the expertise of demagogues and dictators. But recall that it was Kurt Goedel's incompleteness theorem, a math theorem using recursion, that stopped cold Bertrand Russell's program of atheistic positivism.

Finally, the distinction between idols and God, is that idols eat you. You begin by valuing something of extrinsic worth, say money. But when money becomes intrinsically valuable, it becomes an idol that devours its worshippers, and we label one of the 7 deadly sins "greed". Recursion turned it into a monster. So it is with all idols that neither hear nor speak.

So when the NAS decided on a recursive definition of who they were, they let loose the Furies of recursion. When the education establishment founded a licensing scheme for teachers, they ate themselves. What does James say? "Not many of you should presume to be teachers, my brothers, because you know that we who teach will be judged more strictly." Who is doing the judging? Someone external. But if our society has no one above the teachers, if they become a law unto themselves, then their doom is sealed.

Likewise sociology, as a description of what makes society tick, is useful, as was Aristotle's biology, which Ernest Rutherford dismissed as "stamp collecting". But when sociology attempts to be prescriptive rather than descriptive, to change society rather than understand it, then it becomes a recursive science sealing its fate. Hitler famously used the new sociological science of exit polling to influence German elections, but the end result was no elections at all. When the Democratic party started chasing polls as a way to get elected, we got leaders who can no longer lead. NASA had sociologists who told it how to educate the next generation to promote space travel (or whatever their goal was), and NASA started putting all these educational supplements in its grants, but now it is being told its mission is Muslim self-esteem. Because sociology has a prior metaphysical commitment to methodological naturalism which denies purpose, when sociology prescribes a purpose, death is the outcome. Like kuru, like idols, recursion is always very close to death. It is a holy thing.

This is why understanding holiness is not just a good idea, it's survival.
Could you offer a word or two on this below? I am always trying to understand what Goedel was up to. I am slugging through the recent essay on him in First Things.
... But recall that it was Goedel's incompleteness theorem, a math theorem using recursion, that stopped cold Bertrand Russell's program of atheistic positivism.
Thanks for all your questions. It's fun to talk about, even if it ends up taking a thesis to explain!

Logical positivists, such as the Vienna Circle (the Verein Ernst Mach) in early twentieth-century Vienna, were attempting to apply a radical materialism to philosophy and science. Ernst Mach himself didn't believe in atoms because he said they had never been observed. Only empirical observations were to be a valid base for inductive logic, and atoms could not be derived from deductive logic. This led to the founding principle of the positivists: no synthetic a priori allowed, which is to say, no creative assumptions, no metaphysical priors, no philosophical jargon masquerading as science.

It was Bertrand Russell's plan that by rigorously enforcing this ban on language and science, he could undo the Christian metaphysics that infected the Enlightenment, and begin a trajectory of scientific discovery that would escape the geocentric medieval synthesis. Thus language was to be rewritten as symbolic logic, and all illogical statements removed so as to exorcise the demons of metaphysics. He began a project with Alfred North Whitehead to found mathematics on a small set of self-evidently true axioms (the Peano axioms), which he hoped would form the foundation of a new science and new society. His most promising graduate student was Ludwig Wittgenstein, who contributed the famous "Tractatus Logico-Philosophicus" (written in German), whose title was intended to mirror Baruch Spinoza's "Tractatus Theologico-Politicus".

The whole positivist movement fell apart on multiple levels. Wittgenstein abandoned ship in his famous "Blue and Brown" books. The Nazis annexed Austria and viewed the Verein as seditious, causing the members to flee for their lives. Rudolf Carnap ended up at the University of Chicago, where Thomas Kuhn's study--published in Carnap's own International Encyclopedia of Unified Science--was supposed to show from sociology how science progressed when it employed anti-metaphysical approaches. Instead Kuhn found the opposite, and published "The Structure of Scientific Revolutions", which itself became a beachhead for the sociology of science.

But in my mind, it was Kurt Goedel's little paper that undid the entire Russell program. Goedel showed that there are things that we cannot know. Not just "overlooked" or "can't observe" or "have not yet solved" or "too ambiguous to resolve", but statements which can be neither proved nor refuted within the system. In other words, our criteria of truth are too limited; they cannot always separate truth from falsehood. Reason has failed. The Enlightenment is over. Russell cannot exclude metaphysics from his math, because his own math is metaphysical! And no matter what Russell does to rescue his math, that rescue is also metaphysical. Metaphysics, whether we like it or not, is an essential part of reality that cannot be removed, avoided, or even ignored.

And the way Goedel did this was to use recursion, self-reference.

Now recursion is nothing new to mathematics. It's a well-known method of proving theorems. One finds a relationship between x and x+1, and then keeps plugging it in to see where x+infinity ends up. No magic there, just a standard technique in the toolbox. But what Goedel showed was that when we do that with "rules", say, by numbering every theorem and then playing with the numbers, then we have a problem. Even without this trick, even standing firmly in number theory, in the past few decades recursion has led to "chaos theory" and the Mandelbrot set, demonstrating that recursion is a wickedly subtle technique.
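The Mandelbrot set mentioned above comes from a one-line recursion, feeding z back into z*z + c. A minimal membership test (my own sketch; the iteration cap of 100 and the escape radius of 2 are the standard practical cutoffs, not anything deep) shows how plugging the output back into the input sorts points into bounded and escaping:

```python
def in_mandelbrot(c, max_iter=100):
    """Iterate z -> z*z + c from z = 0; c is in the set if |z| stays bounded."""
    z = 0 + 0j
    for _ in range(max_iter):
        z = z * z + c          # the recursion: output fed back as input
        if abs(z) > 2.0:       # once |z| > 2, the orbit provably escapes
            return False
    return True

print(in_mandelbrot(0 + 0j))    # True: the orbit stays at 0 forever
print(in_mandelbrot(1 + 0j))    # False: 0, 1, 2, 5, 26, ... escapes
print(in_mandelbrot(-1 + 0j))   # True: the orbit cycles 0, -1, 0, -1, ...
```

Nothing in the formula hints that the boundary between True and False is infinitely intricate; that surprise is the recursion's doing.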

It's been 75 years since Goedel's little paper, and the theorem stands firm. No one has found a way to get rid of recursion. Douglas Hofstadter wrote a 1979 book on it that greatly influenced me, "Goedel, Escher, Bach: an Eternal Golden Braid". He pointed out how many ways recursion shows up in the "hard" problems of science. How does the immune system work? Are prions alive? Is there a cure for AIDS? What is consciousness? Is there an underlying linguistic grammar? What does a (biblical) text mean? How much computing power is needed for NP-complete problems? The halting problem for Turing machines. And of course your and my favorite: is life intelligently designed?

All these problems at their heart, have a recursive kernel. That's what makes them hard. And since I don't believe that we are any smarter than the generations that preceded us, I think this recursive kernel was known to the ancients as "the holy". Regardless of what trivializing tripe has been written about ancient religious practice, it is evident they were tip-toeing around a nuclear bomb called "Holiness". Once we recognize both the significance of the thing, and the necessity of proper protocols, we can begin to appreciate the lore that (by trial-and-error perhaps) protected these ancient cultures from the Holy.

So it has become my self-appointed task to re-educate my generation on the power and importance of the holy, and our present extremely precarious position.
Continuing the discussion....
I do know some of this history though you go to the heart of it - Which always for the modernist movements involves trying to remove 'the accretions' built up by the medieval tradition, right?!
1) The modernist, progressive 20th century made a fetish out of praising the Enlightenment for having overcome the medieval "dark ages". So it was with a bit of glee that the late Stanley Jaki argued that Newton's 3 laws were all developed centuries before Newton, and in particular, the law of inertia had been fleshed out by theologians in the previous 3 centuries. I think Denyse O'Leary likes to talk about "the things you know that aren't true". I also ran across some of that in Umberto Eco's lectures on Galileo and Columbus that he gave back in the 1970's.
Verein meaning what?
2) Verein is just German for "association" or club. What the encyclopedias like to call "the Vienna Circle" called itself the Verein Ernst Mach. This club was in trouble after the Anschluss, the annexation of Austria by Hitler.
How is math metaphysical? .. Unseen like the Pythagorean theorem, triangles, etc. Math Platonism ? Is that what you are getting at?
3) The metaphysicality of math is of course, rejected by positivists and many modern materialists, who somehow imagine that arithmetic and language are as obvious as breathing. My brilliant advisor at Westminster Seminary, Vern Poythress, has a PhD in math and in NT, and has written on the necessity of Christian metaphysics to do math. That is, he is making more a point than the Platonists did about math. He's saying that it is Christianity that enables math to work. Here's a website to his published materials, and here's an article from his early days at WTS, when he was still talking about math.
I have and continue to look at writing by and about Goedel - What does it mean when we are told that he proved that math was incomplete. which is what you are referring to here. Can you offer a simple example for a simple man?
4) When Einstein wrote his famous 1935 paper with Podolsky and Rosen, he said that Quantum Mechanics was incomplete. Likewise Goedel demonstrated that math was incomplete. For both of them, it means that there are things that should be described by the theory, yet the theory can't tell. Suppose you tell your 3rd grader that every color is found in the rainbow, and your 3rd grader asks "where's brown?" If you say, "I can't tell", then your color theory is "incomplete". Likewise, Goedel showed that a theory of numbers could include number statements that it could neither prove nor disprove; the theory couldn't decide. Thus, it was formally "incomplete".
I read an essay in Commentary by the same name a couple years ago. I recall that it did not enlighten me much, though ??
5) Douglas Hofstadter was a bit of a one-act show. His spectacular book (especially in 1979) landed him a job at Scientific American to replace the late Martin Gardner's famous "Mathematical Games" column. He called it "Metamagical Themas", an anagram of "Mathematical Games". But within a year he had so alienated the readership by sounding both condescending and trite that he was replaced. You might say he was ahead of his time, as the NYT says about science bloggers today.
So I'm not surprised you found his later articles unhelpful. In my youth we talked about getting a "swelled head", and very frankly, it's taken me my 10 years of unemployment to fit back into the hat size I had in Jr High.
Does Otto's book, the Idea of the Holy. help with your take? This paragraph helps. But it appears to be a god of the gaps argument - mystery - as the rationale for religion.
6) I ran across Otto's book in a footnote in a 1963 book, but the library didn't have a copy of it. I'm eager to read it, perhaps I should try Google books...
It is emphatically NOT that "holy" means "mysterious". What "holy" means is closer to "dangerous". If there is anything mysterious about Siberian tigers, it is that anyone who tries to study them disappears. In the same way, experts in the holy tend to get strange and/or disappear. You will note that the Bible warns against learning magic, and that upon the evangelization of Ephesus (Acts 19:19), all the magic books were collected and burned. This is a good example of something "holy" and the effort to contain it. Think of it like radioactivity. There are ways to study it. But keeping a vial of Cobalt-60 on your desk is not one of them.

Well I've probably muddied the waters. But I'm attempting to demonstrate that "holy" has a positive, constructive definition which isn't "mysterious" or "unknowable".

Where have I been this summer?

I hope these photos tell the story:
(a) 35th & Locust, Philadelphia, PA

(b) I-70 Runaway truck ramp, Frisco CO

(c) Church of Good Shepherd, Rosemont PA


Shannon's non-local information and Pictish

In a previous series of posts, I have argued that Shannon Information can be used to identify life from non-life, design from non-design, but it needs to be augmented by looking at non-local correlations, or what I like to call Fourier-space information.

What is refreshing to see is just such an application of Shannon information applied to glyphs in an attempt to discover whether they are a form of language. Many people have used script as an analogy to life, and vice versa, with Stephen Meyer's book Signature in the Cell arguing that the DNA looks like a script.

Since script is much simpler than genomics, and since we all had to master it in grade school, the analogies are direct and simple. The power of the Phoenician invention of the alphabet is that a small number of signs--twenty-six letters in English--can be used in various permutations to handle an almost infinite number of meanings--words. Words are then combined into larger groups called sentences, and sentences into even larger groups called paragraphs, and so on. Then taking the correlation length, the Fourier transform, would reveal information at the level of letters (phonemes if it is a spoken language), words, and sentences.

Applying this to pictographic carving that has never been translated, say, the Indus Valley or the Easter Island, or Central American Olmecs, or the Celtic Picts, should reveal if it has the same "peaks" in the Fourier spectrum. To my great surprise, the recent work on Pictish cited Claude Shannon as the first to use this approach in 1951 to describe the English language. He didn't try to take a Fourier transform, but he achieved the same effect by a limiting procedure.

That is, Shannon took texts from books, made them all uppercase, eliminated punctuation, and made the space character a 27th letter of the alphabet. Then removing random letters from the text, he asked human subjects to guess what was the letter removed. With a little more work, he constructed tables for how difficult it was to predict a letter given one, two, or three letters prior to it. By subtracting the single-letter prediction from the double-letter prediction, he was able to calculate how much extra information the 2nd letter provided, or taking the 3-letter prediction and subtracting the double-letter he could calculate the info in the 3rd letter. In such a manner he could work up to 10 or 100+ letters. Others have extended his method, replacing the human subjects with neural nets or computer models, and find very similar answers.

In other words, Shannon was looking at the information in the Fourier realm, where each letter in a text sits at a one-dimensional integer distance from the next. From Shannon's paper, the information for each successive letter (given his human testers) was, for 1, 2, 3, 4, 5, 10, and 100 letters: 4.03, 3.42, 3.0, 2.6, 2.7, 2.1, and 1.3 bits, so you can see that after enough letters, knowledge of the previous 99 doesn't change the guessing success very much.

We generalize this to a continuum of distances in 3 dimensions plus time. Since we are no longer doing discrete math, we use the continuous Fourier transform to describe our system. And unlike English, there might be all sorts of reasons why the information might have peaks (i.e., it need not decrease monotonically). This "structure" in Fourier space is as significant as "structure" in real space.

To demonstrate the importance of "structure", the Pictish paper was able to use the "one-symbol" occurrence versus "data-set size" to demonstrate that the "words" were unlikely to have occurred by chance, and the "two-symbol" occurrence versus "two-symbol set size" to separate out syllables from heraldic signs and pictograms. They concluded that Pictish was probably written as words, not as syllables. But what is important for our discussion is that this Shannon information (which he stubbornly called entropy in his paper) is calculated from a sequence of Fourier-like terms, or I_f, whereas the data-set size is analogous to I_xt. Then when a cluster of data points has a circle drawn around it and is identified with "heraldic symbols" or suchlike, we can write this schematically as:
Constant > (I_xt - I_xt0)^2 + (I_f - I_f0)^2

This formula is very similar to the I_thresh < I_xt + I_f that we defined for life, if our "normalization" makes the data clump around the zero axis. If we were more clever, we could use the same approach as the Pictish study to separate cellular from multicellular life. But the significant point is that structure in the I_f vs I_xt distribution can be used to great advantage. Generalizing to 3D plus time will only make it more powerful, since we will have more ways to "clump" the data, though of course it will also make the math less tractable. [This is a plea for help to all you mathematicians out there!]
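The "circle drawn around a cluster" can be made concrete with a toy classifier. The centre, radius, and data points below are invented for illustration; only the inequality and the I_xt / I_f notation come from the post.

```python
def in_cluster(i_xt, i_f, centre=(5.0, 3.0), radius=1.5):
    """True if the point (I_xt, I_f) satisfies the schematic inequality
    Constant > (I_xt - I_xt0)^2 + (I_f - I_f0)^2,
    i.e. lies within `radius` of the cluster centre (I_xt0, I_f0)."""
    i_xt0, i_f0 = centre
    return (i_xt - i_xt0) ** 2 + (i_f - i_f0) ** 2 < radius ** 2

# Hypothetical (I_xt, I_f) pairs, e.g. per-symbol statistics from a corpus.
points = [(5.2, 3.1), (4.6, 2.8), (9.0, 0.5)]
labels = [in_cluster(x, f) for x, f in points]
```

Here the first two points fall inside the circle (the "heraldic symbol" cluster, say) while the third does not; the classification uses only where a point sits in the I_xt-I_f plane.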

Finally, we note that the "ideal gas entropy" generally discussed in physics textbooks has little if any relevance to these calculations, because in Shannon's terms it has no N-gram information for N>1. The locations of two gas atoms in a box give you no help in finding the 3rd atom, and so forth. In other words, it has no I_f, no Fourier component, no structure.
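The ideal-gas point can be checked with the same N-gram bookkeeping: for letters drawn independently at random (the textual analogue of non-interacting gas atoms), the second letter carries essentially as much fresh information as the first, i.e. F_2 is approximately F_1, with no structure beyond the unigram. A small sketch, where the two-letter alphabet and sample size are arbitrary choices:

```python
import random
from collections import Counter
from math import log2

def block_entropy(text, n):
    """Entropy (bits) of the empirical distribution of length-n blocks in text."""
    grams = [text[i:i + n] for i in range(len(text) - n + 1)]
    counts = Counter(grams)
    total = sum(counts.values())
    return -sum(c / total * log2(c / total) for c in counts.values())

random.seed(0)  # reproducible "gas"
iid = "".join(random.choice("AB") for _ in range(200_000))

f1 = block_entropy(iid, 1)       # close to 1 bit for a fair 2-letter alphabet
f2 = block_entropy(iid, 2) - f1  # extra info carried by the 2nd letter
# For i.i.d. letters, f2 is essentially equal to f1:
# knowing one letter gives no help predicting the next.
```

Contrast this with real English, where Shannon's F_N drops steadily with N; the gap between the flat i.i.d. curve and the falling English curve is exactly the "structure" the post is after.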

So we find some odd confirmation from Pictish that our Shannon information measure, I_thresh, was recognized by Shannon himself as needing Fourier components, and we also discover a graphical method of extracting signal from noise.

Friday Impromptus - 6-11-2010

It's Friday, and I've been frantically putting out fires from friends who get alarmed easily.  So here is a collection of quenching emails passing through my bit bucket:

* For some reason (cynically: always the same one), my colleagues at NASA have decided to make this the "year of the solar flare". Fox News carried a great pic with a scary title, Electronic Armageddon, warning us that the sun was going to destroy all our iPhones, or something. The "event" that everyone wants to use as the paradigm example was the Hydro-Quebec transformer in Canada that blew up in 1989, when a solar flare triggered a spike in the 7,100 miles of power lines they have on the grid. This failure left a lot of Quebec in darkness before the power was back on line within 9 hours (worst case: a few days). To put this in perspective:
a) the ice storm of 1998, which collapsed electric transmission towers, left much of Quebec in darkness for up to 33 days. Regular weather is far scarier than space weather.
b) the sensitivity of the electric grid to the sun's outbursts depends on being very long, and very arctic. The next longest transmission lines after Hydro-Quebec are 2100 miles long, and are in the USA, far lower in latitude. And they have never had a space weather event.
c) changes to the design of transformers after the 1989 event meant that the next series of solar flares, in the following solar cycle, including the off-the-scale 2003 event, did no damage. We are now entering an unusually weak solar cycle, which should calm fears further.
d) numerous changes to the grid have occurred, as failures such as the Cleveland-area blackout of 2003 caused power designers to make their switches smarter.

The conclusion? The Sun should be the least of your worries. So why is Congress debating The Grid Reliability and Infrastructure Defense Act? I suppose because they think the potential loss of billions of dollars to power companies isn't sufficient motivation for them to prepare for an outage, and they need some sort of government meddling to properly prepare. You know, like all that government red tape about drilling for oil in the Gulf really helped the US prepare for Deepwater Horizon. Not.

No, I don't think you have to have a rational reason for this bill, merely a political one. After taking over the banks, the car companies, and the oil companies, Congress is setting its sights on the power companies. What could be more natural?

But isn't there some fear of a nuclear blast that generates EMP and wipes out the power companies? Some, but not too much. I mean, if a nuclear bomb went off over the Washington DC power plant, there would be other concerns much more pressing than the loss of electrical power to the White House teleprompter. Yes, we should harden our power grid for all kinds of good reasons, but fear of widespread EMP is way over-hyped. In my darker moments, I suspect it is because for some people, the loss of human life is less significant than the loss of their electronic gadgets. And you've got to stir up your base, you know.

* In the last 12 months, biologists have been hogging the limelight. We've had two missing links, Ardi and Ida, discovered, and one synthetic life form created. Only they turned out not to be links to humans, and so they weren't actually missing, and Venter didn't exactly create anything; he just sort of copied it, using living cells to do the actual copying. So in other words, it isn't just physicists who are full of hype; it appears to be a widespread disease. The "missing link" fellows converted the notoriety into cold cash, so perhaps it's not even a new disease, merely the creeping corruption of our age now infecting the ivy-covered halls of academe.

* After the wild success of the IPCC in influencing every nation in the world, thousands of NGO hacks are wondering how to form their own UN agency. France has teamed up with Japan to propose another UN committee, this time to monitor "species". Forget for the moment that we don't know how to define them. And forget too that "species" is an abstract concept, containing such things as "brontosauri", "unicorns", and "gryphons". Because forming government agencies to inform or monitor or control something or other is not about knowledge, but about power. But in this case, I'm struggling to understand what is so wonderful about "species diversity" or whatever it is that justifies the grab.

a) The tropics have more species diversity than temperate zones. Does this make them "better"? Most of the world's food and wood and renewables come from the temperate zone. So why is it important to monitor or (gasp!) control species diversity in the tropics?
b) The usual argument for why the tropics are so diverse is that it is a consequence of a ruthless Darwinian struggle in a system that has no stable equilibria. If that be the case, then how can external meddling, and presumably a feedback mechanism to stabilize it, possibly result in _greater_ diversity? Wouldn't it do the exact opposite?
c) If I were a poor tropical country, I can see why I might like the idea of having power over the big and rich (because of their superior natural production) temperate nations. But why would a temperate nation want to get on board with some sort of artificial commodity called "specie diversity"? Is this the same game of turning a fertilizer into a pollutant? (Yeah, I know that supposedly species diversity is a "genetic resource" that will enable a cure for cancer and genetically modified corn. But now that Venter has promoted "synthetic" life, who needs the real thing? Don't we use computers to search for new drugs nowadays, and not vast herbal surveys?)
d) Oh, and recent papers (and here) suggest that much of the "genetic diversity" of the tropics is hype, based on bad models.

So why is it that the more I study the more this looks like the Carbon trading scam?

* Finally, not to overlook some more Congressional foolishness, there's Larry's Law, inflicting the inanity of Harvard's feministas on innocent bystanders. Tierney's NYT article, entitled Daring to Discuss Women in Science, suggests that setting quotas for women in the sciences won't really accomplish too much. Of course, if you agree with Tierney, it is a small step to seeing the inanity of racial quotas, national quotas, and/or political quotas. (If only there were a quota for Republicans!) Which is to say, there's a lot of similarity with Herrnstein and Murray's reviled book, The Bell Curve.

But more importantly, Tierney points out that most of the objections raised against H&M, and also against Larry Summers (the ousted president of Harvard), have been refuted by recent research. There really, really are differences in math ability between men and women, just as there really, really are differences between races. Saying it ain't so doesn't make it go away.

But if there really are measurable mental differences between men and women, or between races, how then do we outrun the "evil Nazi empire" that is cackling in our ear? How do we avoid somebody, somewhere making the argument that a nation can be improved by purging it of inferior races, subjugating women, and establishing a nobility? I mean, all this "diversity" nonsense has no economic or scientific or even societal benefit. Does your business make more money if it is diverse? Does it make your academic department more productive? Does it make your parties more fun? (Other than vive la différence, of course.) I didn't think so.

Recently, philosopher Michael Ruse attempted to defend Darwin from Hitler, saying,
“Darwin was explicit that when the races met and (as so often was the case) the non-Europeans suffered, it came not from intellectual and social superiority but because non-European caught the strangers’ diseases and suffered and died.”
It's a brave defense, though the argument is not actually due to Darwin; it was Jared Diamond's thesis in Guns, Germs, and Steel, arguing that even though Darwin's theory leads to superior races, in the case of humans the random noise of the environment completely swamps the "superior gene" effect. I see it as a desperate attempt to claim randomness can both force evolution and prevent it at the same time, or at least prevent the moral consequences of evolution.

However, there is perhaps the shadow of an answer in one of the common objections to H&M. H&M use yearly normalized IQ scores. If they had used unnormalized IQ curves, then 100 years ago we were a nation of imbeciles. In point of fact, westernized nations have been showing an increase in the mean IQ of 3 points per decade, otherwise known as the Flynn effect.

That means that "racial averages" are not static. It means that "racial averages" are not genetic. H&M would like us to believe that identical-twin studies and statistics can separate IQ into a "nature" and a "nurture" component. But the Flynn effect doesn't fit either category. "Nature" should operate on timescales of centuries (or the speciation rate, or something like that), whereas "nurture" should operate on the timescale of a few years (however long it takes to nurture a child). Instead we have a trend with a wavelength of at least one century, far too fast for genetics, and far too slow for nurture. This is what I call the "epigenetic" timescale.

Ethically, these timescales are important to separate. If "nurture" dominates, then the parents are "wrong" for raising their children stupid. If "nature" dominates, then eugenics is justified, and we should all get used to being racists. But if "epigenetics" dominates, then we need to reconsider multi-generational ethics, what we used to call "coming from a good family", or  "being part of the nobility".

Simply put, H&M's conundrum, Ruse & Diamond's desperation, and Congress' foolishness is that the New World has rejected the Old World wisdom, and is reaping Third World consequences.

Voom! Evolution in Fourier Space: appendix

In my previous 4 posts, I attempted to demonstrate how the definition of life as an information threshold, combined with the non-locality of information and the inversion of space-time at an "event horizon", gives us a handle on the origin of life (OOL).

But what I failed to do was to give a complete example. While racking my brain, I realized that Craig Venter's claim to have made artificial life was a perfect example. Now I know all the criticisms of Venter: he copied an existing genome, he used a living cell to culture his copy, his copying machine involved biological reagents, etc. But grant me the privilege of pretending that he accomplished what he really wanted to demonstrate: the making of a new genome that had never existed before since the beginning of time.

Let's pretend he took the DNA pattern for a ribonuclease from a cat, the DNA for a phosphatase from a mushroom, and the DNA for actin from a squid. Just because he could, he ran his computer simulations of cytochrome-C until he found a modification that made it self-destruct at precisely 42°C. So not only has he produced a chimera that has never existed, but he has put a signature "designer enzyme" in it. When he cultured and grew the thing, lo and behold, it was a Mycoplasma shaped like the letter "V" with two flagella on the end. Now the world will have to acknowledge that this organism is truly "artificial", even if most of it is standard stock issue.

Now if Venter had done this, say, 550 million years ago, we would say that Mycoplasma had "evolved" a new function. It looks just like the fossil record for evolution. Likewise, it meets all the requirements for an "information injection", which ID attributes to a designer; and in fact, Venter is an intelligent designer.

Since this is our test case, my question becomes: what does this look like in terms of an information threshold, I_thresh? As far as I_xt goes, one new enzyme with a few base-pair changes is nearly indistinguishable from the previous edition (though still far above threshold). So we don't see much change in I_xt, and if we are evolutionists, we might even convince ourselves that these small changes are indeed random.

But what about I_f? Well, that is asking the question of how the rest of the world is "correlated" with this change. Venter himself said it was the culmination of a 10 year effort. So we have a time-correlation of at least 10 years going into this one change. In addition, Venter said new techniques had to be developed, laboratories equipped, papers written, seminars attended, perhaps 1000 people were directly involved in this accomplishment. Indirectly we have the entire genetics community and the businesses that support them, perhaps another 10,000 spread out over the globe. So our correlation length goes out to the size of the world. All pre-adapted before the event in the lab that was celebrated when the first organism divided and grew.

Going back to the analogy with OOL, this "creation event" took correlations in space and time and focussed them down on a single event. This event then took off and had a life of its own, at least, until the cell-culture was exterminated, perhaps by lack of funding 5 years from now. So pre-life information was diffuse spatially (and perhaps temporally), but became focussed on a 2 micron spatial scale with linear temporal existence, until the cell line was terminated 5 years later, at which time the information diffused away spatially, leaving behind a shadow in the literature with a temporal shelf-life of centuries.

Each creation event is a mini-big bang, a miniature event horizon. At each of these "pops" the space and time coordinates invert, and information moves from the diffuse spatial realm to a focussed spatial realm, or from a diffuse temporal realm to a linear time line.  Each of these "pops" resembles a death if the movie is run backward. Then the "design" in ID, is the appearance of these information "pops" over time, which correspond to a concentration, a focussing, a transformation of spatial to temporal information.

All of this can be observed and described without having much detail about Craig Venter's personal life. The "pop" does not rely specifically on Venter's brain, or on his specific experiences, but is a general property of "intelligent actors".

If one day James Cameron decides to go beyond 3D movies and make a "holographic movie", then this would be a good analogy of evolutionary history. The diffuse spatial information in the foggy hologram of the film is converted to a sharp image in space, and the timeless information on the reel of film is relayed at 24 frames/second to form a linear and limited temporal existence of the holographic plot. "Life-like movie creation" is the concentration of diffuse spatial and temporal information into a small location inside the movie theater, in the same manner as real-life creation, as exemplified by Venter.

Whether they intend it or not, Venter and Cameron are filling out the details of what an ID designer does. Therefore we can no longer claim that ID "has no mechanism" for evolution. It most assuredly does. But it is non-local.