Information Digestion Model

- Conceptual Models developed to make sense of Data/Math Correspondences
- Information Digestion Model: Core Postulates – Living Matter: a synergy of Matter & Information Digestion
- Contrasting Assumptions: the Matter Model & the Information Model
- Matter & Information Planes have unique Fields of Action.
- What factors 'forced' us into making our Assumptions?

What is the Information Digestion Model? What does it describe? Why is it necessary? What are postulates? What relevance do they have? This article attempts to answer these questions.

Euclid wrote **Elements** circa 300 BC. This book laid the foundation for geometry (from the Greek for 'earth measurement'). As a testament to its lasting influence, **Elements** served as the primary textbook for the subject for over 2000 years. The book also provided an excellent example of deductive thinking at its best. "**Elements** epitomized the axiomatic-deductive method for many centuries." (Encyclopedia Britannica) In other words, Euclid's method of deductive reasoning is hard-wired into our cultural consciousness. It could even be claimed that the structure of Euclid's **Elements** provided the model for the mathematical proof and even the scientific method.

What is the axiomatic-deductive method? First, postulates (axioms) are established. Then, a system of theorems and corollaries follow by deductive necessity. In similar fashion, a mathematical proof starts with givens, then follows step-by-step to a definitive, logical conclusion. In the ideal debate or discussion, definitions are established. Critical thinking is then applied to these definitions to reach a reasonable decision. In each case assumptions are made (certain ideas are assumed to be true), then reasoning is applied to these assumptions to draw logical conclusions that are true by necessity.

The beauty of Euclid's geometry is that only 10 postulates and axioms (the givens) are needed to generate the hundreds of theorems that form the heart of an entire branch of mathematics. For some 2 millennia after the book was written, most believed that these postulates were inherently true. The implicit assumption was that innate truth existed independent of context. This mindset was shaken by an investigation into Euclid's 5th postulate, the parallel postulate - in its familiar form, the claim that through a point not on a given line, exactly one line can be drawn that never meets it.

For centuries, leading mathematicians from many diverse cultures attempted to prove that the parallel postulate was instead a theorem that could be derived from the 9 remaining postulates. This included great thinkers from ancient Greece, the Arabic empire, and Europe. The speculation was that the parallel postulate was an unnecessary given. At one point, proving the parallel postulate was considered one of the 4 great problems of mathematics. Whoever could derive the proof would win great fame. But, try as they might, no solution was found. After spending his productive career in search of this elusive proof, the mathematician Farkas Bolyai warned his son:

"For God's sake, I beseech you, give it up. Fear it no less than sensual passions because it, too, may take all your time and deprive you of your health, peace of mind, and happiness in life."

Finally a few mathematicians, including the son, tried a different approach. Instead of attempting to prove the parallel postulate, they investigated what would happen to the mathematical system if the postulate were tweaked in a variety of ways. Around 1830, Nikolay Lobachevsky and János Bolyai investigated a geometrical system in which more than one line parallel to a given line can be drawn through a single point. This was the beginning of non-Euclidean geometry, i.e. a geometry that is not founded upon the parallel postulate. Generalizing these ideas, Bernhard Riemann established the foundations of this new type of geometry in his 1854 lecture, published in 1868.

Initially, this fresh perspective was ignored, as it contradicted the contemporary mindset. Prior to the 19th century, Euclidean geometry was generally assumed to be innately true. Lobachevsky's mathematical system challenged this notion. Due to the controversial nature of the insight, Gauss, the leading mathematician of the day, would not even offer public support for Lobachevsky, even though he endorsed his system in private letters. Without this support, Lobachevsky lived a life of obscurity. However, as an indication of how truly revolutionary his insights were, Lobachevsky was eventually deemed the Copernicus of geometry.

After Riemann's generalizations brought non-Euclidean geometry into the mainstream, logical thinkers began to realize that truth wasn't inherent to the system, but instead derived from the postulates, the givens/assumptions. As an example: the parallel postulate leads to the theorem that the angles of every triangle sum to 180°. With a slightly different fifth postulate (assumption), it can be proved that every triangle's angles sum to more than 180°, or to less than 180° with yet another given. The number of degrees in a triangle is not inherent to the triangle but dependent on the assumptions/givens.
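This dependence can be made concrete with a standard result of spherical geometry. The sketch below is our illustration, not part of the original argument: on a sphere of radius R (one model of non-Euclidean geometry), Girard's theorem ties a triangle's angle sum to its area.

```latex
% Girard's theorem: on a sphere of radius R, a triangle with
% angles A, B, C and surface area S satisfies
\[
A + B + C \;=\; \pi \;+\; \frac{S}{R^{2}}
\]
% so the angle sum always exceeds 180 degrees.
%
% Worked example: place one vertex at the north pole and two on the
% equator, separated by 90 degrees of longitude. All three angles are
% right angles, so the sum is 270 degrees. The triangle covers exactly
% 1/8 of the sphere, S = \tfrac{4\pi R^{2}}{8} = \tfrac{\pi R^{2}}{2},
% and the theorem gives \pi + \tfrac{\pi}{2} = 270 degrees, as required.
```

On a flat surface (R → ∞) the excess term vanishes and the Euclidean 180° is recovered, illustrating how the theorem, not the triangle, carries the assumption.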

Initially, non-Euclidean geometry was assumed to be a mathematical curiosity without practical application. Hence it seemed less real. As evidence, Lobachevsky even called his new mathematical system 'imaginary geometry'. Fairly quickly, however, it was found that this deviant geometry was a better fit for significant aspects of reality. Euclidean geometry applies to flat surfaces. One type of non-Euclidean geometry applies to negatively curved (saddle-shaped) surfaces and another to positively curved (sphere-like) surfaces. In Euclidean geometry the shortest distance between two points is a straight line. This seemingly self-evident fact is not true if the surface is curved. In that case, the shortest path between 2 points is a curve - a geodesic.

We experience this seemingly counter-intuitive notion when traveling around the globe in a jet. The surface of the Earth is roughly a sphere, and we can't fly through the Earth. A jet therefore takes a route that arcs toward the poles, exploiting the insights of non-Euclidean geometry. Another example: Einstein showed that gravity curves the space-time continuum. As a result, non-Euclidean geometry is also more applicable in strong gravitational fields - which is why general relativity, rather than Euclidean reasoning, correctly predicts the precession of the planet Mercury's orbit.
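The polar-route claim can be checked numerically. The sketch below is our own illustration under the usual spherical-Earth assumption (the coordinates and function names are invented for the example): the standard haversine formula gives the great-circle distance, and a simple vector construction shows that the shortest path between two points on the 50th parallel arcs far north of that parallel.

```python
# Illustrative sketch only -- coordinates and helper names are ours,
# not drawn from the article.
import math

EARTH_RADIUS_KM = 6371.0  # mean radius of the Earth

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=EARTH_RADIUS_KM):
    """Shortest surface distance between two points on a sphere (haversine)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

def geodesic_midpoint_lat(lat1, lon1, lat2, lon2):
    """Latitude (degrees) of the midpoint of the great-circle arc."""
    def to_vec(lat, lon):
        la, lo = math.radians(lat), math.radians(lon)
        return (math.cos(la) * math.cos(lo), math.cos(la) * math.sin(lo), math.sin(la))
    (x1, y1, z1), (x2, y2, z2) = to_vec(lat1, lon1), to_vec(lat2, lon2)
    mx, my, mz = x1 + x2, y1 + y2, z1 + z2  # direction of the arc's midpoint
    return math.degrees(math.atan2(mz, math.hypot(mx, my)))

# Two points on the 50th parallel, 120 degrees of longitude apart:
# the shortest route between them arcs up to about 67 degrees north.
print(round(geodesic_midpoint_lat(50, -60, 50, 60), 1))  # ≈ 67.2
```

Following the 50th parallel itself would be a longer, non-geodesic route; the great circle bulges poleward, which is exactly why long east-west flights at high latitudes fly near the pole.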

In the 1930s, Tarski proved that truth within a formal system cannot be defined by that system itself - logic alone does not and cannot determine truth. This proof lends further support to the notion that truth is not independent, but is instead determined by our assumptions. This mathematical relativism provides a different set of questions to investigate. We can no longer ask: "Which logical system is true, or even more true?" or "Which mathematical system makes the most sense?" Instead we must first ask: "Which mathematical system provides the best fit for the empirical data?" Once this is determined, we can then ask: "Which conceptual model provides the greatest explanatory power for the math/data synergy?"

The math/data synergy that is at the heart of the conceptual model forces us to make counter-intuitive assumptions that some might claim are unbelievable.

The subatomic world provides a great example of this process. It doesn't make sense that electrons and photons move backward and forward in time. But we must make this assumption if the mathematical system is to fit the facts. In his book **QED**, Nobel Prize winner Richard Feynman states, "I have delighted in showing you that the price of gaining such an accurate theory has been the erosion of our common sense. We must accept some very bizarre behavior." (**QED**, p. 119) In similar fashion, Isaac Newton was forced to make the then radical assumption that 2 objects attract each other from a distance (the force of gravity) to make sense of the connection between his mathematical system and the empirical data. Einstein was also forced into making the counter-intuitive assumption that the space-time continuum is curved to make sense of his results.

The Living Algorithm generates a mathematical system based upon the manner in which it digests data streams. The Living Algorithm's mathematical system exhibits many patterns of correspondence with an abundance of diverse experimental evidence – behavioral, biological, and neurological. To make sense of these correspondences, we developed a model of Information Digestion. As with Feynman, Einstein and Newton, we were forced to make some seemingly radical assumptions. In the next section, we will present the underlying givens/postulates/assumptions of our mathematically based Information model, one of which is mental energy.

Once we have listed our givens, we will examine the assumptions behind an alternate conceptual model - the Matter model. For instance, the Matter model assumes that there is only physical energy, while our model assumes the existence of both physical and mental energy. We suggest that these alternate assumptions represent the current mindset of the scientific community – even the implicit assumptions of the college-educated population. As we've discussed, Euclidean geometry is applicable to flat surfaces, while non-Euclidean geometry is appropriate to curved surfaces. Similarly, the Matter model is more applicable to certain features of reality and less applicable to others. We hope to illustrate that the Living Algorithm-based Information model has greater explanatory power for significant features of living behavior than does the Matter model.

Although our Information model is based in the Living Algorithm's mathematical system, it is conceptual. Similarly Isaac Newton's gravity model, Einstein's space-time model and Feynman's subatomic model are also conceptual, although based in mathematical systems. Hence, our assumptions do not have the mathematical rigor of Euclid's geometry, any more than does Newton's assumption of attraction at a distance, Einstein's warping of space and time, or Feynman's moving backwards and forwards in time. These are all conceptual attempts to make sense of the perceived patterns of correspondence between a mathematical system and empirical data.

1. Matter exists. Matter, in this context, is defined as atoms and collections of atoms, i.e. molecules. In other words, Matter is the substance of our molecular world. Physical dynamics determines Matter’s behavior. (Note: the subatomic world is excluded from this definition.)

2. Living matter exists. Living matter, i.e. Life, in this context, is defined as single cells and multi-cellular organisms. Physical dynamics determines the behavior of Life’s material component. (Note: viruses are excluded from this definition.)

3. Living systems must digest environmental information in order to fulfill potentials, such as survival.

4. In addition to a material component, Life has an immaterial component. The immaterial portion is based in the process of *information digestion*.

5. Life's two components, matter and information, form *orthogonal* planes. The intersection of these planes is Life's field of action.

6. Living Systems digest information via the *Living Algorithm*. As a corollary, *Information Dynamics*, the mathematical system generated by the Living Algorithm's digestion process, shapes the behavior of Life’s immaterial, information-based component.

Note: material dynamics applies to matter in both the cellular and molecular worlds.

These core assumptions/postulates are a way of making sense of the many patterns of correspondence between the Living Algorithm's mathematical system and empirical evidence. Because information is central to our model, we refer to it as the Information model. Let's examine some alternate assumptions. Because matter plays a central role in the alternate model, we refer to it as the Matter model.

The first 2 postulates are not controversial. Most agree that both matter and living matter exist. In fact, academia institutionalizes this dichotomy in their division of departments into Material Sciences, including Physics and Chemistry, and Life Sciences, including Biology and Botany. Chemistry is further divided into organic and inorganic chemistry. The former deals with the chemistry, the molecular interactions, of living systems and the latter with the chemistry of exclusively material systems.

Despite this division, many cognitive scientists, if not most, feel that living matter is a subset of matter. In fact, it could be argued that this is a prevalent mindset of the intelligentsia. As such, the 4th postulate represents a significant divergence from the norm. The assumption that Life has an immaterial component as well as a material component is sure to raise more than a few eyebrows.

Although many believe that Life is a subset of Matter, most of these would agree that living systems must digest environmental information in order to fulfill potentials, including survival (the 3rd postulate). The materialist mindset suggests that Life processes data in an electronic fashion, like a computer. Our thinking (our ability to process data) is due to biochemical interactions. A few molecules get together as neurons; electricity bounces around our brain; and we feel happy, enlightened, inspired or sad. An abundance of evidence provides support for this materialist viewpoint. For instance, if a part of our brain is injured, we are unable to access our complete intelligence. In contrast, the Information model's 6th postulate states that living systems digest information via the Living Algorithm.

The implicit belief of the materialist perspective is straightforward: the automatic interactions of living matter determine how we think and how we behave. All matter is ruled by material dynamics and humans process information like a computer. This alternate set of assumptions belongs to the Matter model.

In contrast to this exclusively materialist assumption, our 4th postulate assumes that the digestion of environmental information that is required of living systems occurs on an immaterial plane. We don't know exactly where the information digestion plane exists, except that it is not material. Just as Feynman and the physics community refuse to speculate about how photons can go backward and forward in time, we refuse to speculate about where living systems digest information. This assumption is merely our way of conceptualizing the patterns of correspondence between the Living Algorithm's mathematical system and an abundance of empirical evidence.

Under this way of thinking, living matter is not a mere subset of matter. Instead, living matter is the intersection of the orthogonal planes of matter and information. This intersection is Life's field of action. The reason is straightforward. According to the Information model, Life has an immaterial component based in information digestion that Matter does not have.

However, this interpretation emphasizes content over process. In fact, the behavior of living matter is far more important to us than is the content. The content's importance lies in its relevance to behavior. The Information model's 6th postulate states that Life digests information via the Living Algorithm. As a corollary, Life's information-based component is guided by data stream dynamics. While the behavior of matter is determined solely by physical dynamics, the behavior of living matter is influenced by both physical and data stream dynamics. In other words, the intersection of material and data stream dynamics determines living behavior.

It is evident that the Matter Model and Information Model share some common assumptions and diverge significantly on others. What is the relevance of the differences between the two conceptual models? What is the application, if any, of the Information model to reality?

To answer these questions, let us first review the similarities and differences between the 2 conceptual models – Information and Matter. Both models include the first 3 postulates. Both Matter and Living Matter exist. Living Matter processes/digests environmental information in order to fulfill potentials, including survival. The divergence occurs within the remaining postulates.

The Matter-based model includes the assumption that living matter is a subset of matter. This leads to the theorem: physical dynamics determines the behavior of living matter. The theorem leads to the corollary: living matter processes information in a material fashion - like a computer. The Information Model includes the divergent assumption that living matter is a synergy of matter and information. While physical dynamics dictates the behavior of the material component, data stream dynamics dictates the behavior of the information component. The Living Algorithm's mathematical system determines the dynamics of a data stream.

Let us see where these 2 divergent models lead. We are not attempting to determine which is true. Remember: Euclidean geometry best describes the properties of flat surfaces, while non-Euclidean geometry best describes curved surfaces. Similarly, our 2 models best describe different features of reality. They have different fields of action. The diagram at right contains a partial list of the different features of empirical reality that the 2 models are able to address.

The question mark indicates that the Matter model is unable to provide a satisfactory explanation for many significant features of human existence. The arrow does not point both ways because the Information model contains the Matter model within its scope, but not vice versa. The Matter model denies the existence of independent Information digestion in its postulates. In other words, the Information model is inclusive, while the Matter model is exclusive. Instead of living matter being a subset of matter, the Matter model is a subset of the Information model.

The Information model with its unique set of assumptions was developed as a way of understanding the patterns of correspondence between living behavior and the mathematical behavior of the Living Algorithm system. The desire to conceptualize the Math/Data synergy regarding significant features of human behavior led to the above assumptions. Similarly, the assumptions behind the Matter model arose as a way of understanding a different set of correspondences. Unfortunately, due to conflation, an innate cognitive process, we tend to attribute reality to our models, rather than understanding that models are inherently metaphorical in nature. For instance, due to the prevalent scientific mindset, many believe the Matter model to be true, rather than just a model.

Reiterating for emphasis, conceptual models, such as the Information and Matter models, are developed to make sense of the fit between mathematics and empirical reality. The desire to conceptualize the Math/Data synergy leads to or even 'forces' certain assumptions. How did the patterns of correspondence between the Living Algorithm system and empirical reality 'force' the assumptions behind the Information model?

Let us begin with the question: Why were we 'forced' to assume that living matter has a non-material component (Postulate 4)?

The Matter model has not provided a plausible explanation for any of the phenomena listed above right. Our need for Sleep is high on the list of these unexplained phenomena. It is manifestly apparent that exclusively matter-based explanations are insufficient to encompass the wide scope of human behavior. Due to this lack of potency, we were 'forced' into entertaining an alternate assumption. There must be another plane of existence besides the material plane. (Similarly, due to the inability of a prior model to account for certain data, Einstein was ‘forced’ into assuming the counter-intuitive notion that the speed of light is constant.)

Why were we 'forced' to assume that this alternate plane of existence is based in information digestion and that the Living Algorithm provides the method of digestion? (Postulates 5 and 6) There is a complex of reasons for these assumptions.

1) The mathematical behavior of the Living Algorithm's model, independent of the Information model, mimics the behavior of many unexplained phenomena, including sleep. For an in-depth examination of these correspondences, read the *monograph* – *The Triple Pulse of Attention*. These correspondences between the Living Algorithm and many empirical phenomena, including sleep, are the Math/Data synergy that generates a conceptual model and forces assumptions.

What is the mechanism behind this particular synergy between Math and Data? Are the many correspondences due to some confounding variable? If not, is there a conceptual model that makes sense of this synergy? To develop a plausible model, we asked the question: Could it be that living systems employ the Living Algorithm to digest information?

2) To answer this question, we did an extensive comparison of a variety of systems, including Physics and Probability. It was seen that the Living Algorithm system is the only mathematical system that met the requirements for living information digestion. These requirements included providing up-to-date information about data flows as well as the possibility of environmental interaction and informed choice. For this analysis, see the *monograph* – *Mathematics of Living Systems*.
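Although the article never writes down the Living Algorithm's equations, the requirement described here - a running, up-to-date summary of a data flow - can be illustrated with a simple decaying average. The sketch below is purely hypothetical: the function name, the decay factor `d`, and the update rule are our illustrative assumptions, not the Living Algorithm itself.

```python
# Hypothetical illustration -- NOT the Living Algorithm's actual equations,
# which this article does not specify. A decaying average is simply one way
# a system could keep an up-to-date measure of a flowing data stream.

def digest_stream(stream, d=2.0):
    """Fold a data stream into a running, recency-weighted average.

    d is an assumed 'decay' factor: each new data point pulls the
    running average 1/d of the way toward itself, so recent data
    dominates the summary while old data fades away.
    """
    avg = 0.0
    summaries = []
    for x in stream:
        avg += (x - avg) / d  # move 1/d of the way toward the new point
        summaries.append(avg)
    return summaries

# A burst of activity (1s) followed by rest (0s): the running measure
# rises during the burst, then decays away during the rest.
print(digest_stream([1, 1, 1, 0, 0, 0]))
# [0.5, 0.75, 0.875, 0.4375, 0.21875, 0.109375]
```

Unlike a simple cumulative average, such an update always reflects the recent flow of the stream, which is the kind of 'up-to-date information about data flows' the requirement above calls for.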

From this analysis it seems that living systems may employ the Living Algorithm to digest data. How does this tentative conclusion help explain the patterns of correspondence between the Living Algorithm’s Triple Pulse and multiple sleep-related phenomena (the Math/Data synergy)?

3) In a mathematical analysis, it was seen that data stream dynamics is at the heart of the Living Algorithm's mathematical system. Data stream dynamics and material dynamics share the same underlying structure. More importantly, data stream dynamics provides some plausible reasons (causal mechanisms) behind the Living Algorithm's correspondences with empirical reality (the Math/Data synergy). For more, check out the monograph – *Data Stream Dynamics*.

The interlocking redundancy of these 3 unique methods of investigation 'forced' us into postulates 5 and 6. 5) Living systems are influenced by information digestion, independent of matter, and 6) the Living Algorithm provides the method of information digestion. Let's summarize. The 1st monograph connects mathematics with experimental data. The analysis suggests that a Math/Data synergy exists between the Living Algorithm and empirical reality regarding sleep. The 2nd monograph established that it is a plausible assumption that Life employs the Living Algorithm to digest data streams. A corollary to this assumption is that Life would be subject to the processes that arise from the Living Algorithm's method of information digestion. The 3rd monograph developed a mathematical system based around the Living Algorithm's process of digesting information – data stream dynamics. Data stream dynamics provides a plausible causal mechanism that links the Living Algorithm's information system with human behavior. The existence of this causal mechanism indicates that humans could indeed be subject to the Living Algorithm processes.

The reasoning and predictions are relatively straightforward. If Life employs the Living Algorithm to digest data streams and data stream dynamics determines the behavior of the Living Algorithm's mathematical system, then living systems are also guided by data stream dynamics.

In other words, our assumptions and analysis indicate that data stream dynamics should influence Life. Our Information model leads to this prediction. If the evidence contradicts the prediction, revisions must be made in the assumptions of our conceptual model. If the evidence supports our prediction, this provides confirmation for our conceptual model and its postulates. The first monograph provides the supporting evidence. These intricate congruencies reinforce our assumptions.

Explanations based upon the Living Algorithm's method of information digestion seem to provide a deeper understanding of significant features of human behavior, such as sleep. In contrast, matter-based explanations are unable, as yet, to provide any insight into these same behaviors. Due to the explanatory power of data stream dynamics regarding significant features of human experience, combined with the impotency of the Matter model, we were 'forced' to assume that living systems employ the Living Algorithm to digest data streams. This assumption is at the heart of our conceptual model - the Information model.

Remember, we are not saying that the postulates are innately true or that they are logical conclusions. We are merely stating that they are plausible assumptions that lead to distinct predictions. If empirical evidence contradicts the predictions made from these postulates, revisions must be made. This refinement of postulates is the essence of scientific progress.

Initially, it was thought that light is purely a wave. This postulate was revised when more precise data - notably the photoelectric effect - came in. Copernicus' postulate that the Earth circles the Sun revised the common sense notion that the Sun circles the Earth. In turn, Kepler's postulate that planets move in elliptical orbits refined Copernicus' circular postulate of planetary motion. Each revision was based around a better fit between data and mathematics. The Math/Data synergy forced the revisions. Kepler, Newton, Einstein, and Feynman were all 'forced' to entertain counter-intuitive postulates to make sense of the Math/Data synergy. Similarly, due to the fit between mathematics and empirical data, we were 'forced' to entertain the assumptions that living matter is a synergy of matter and information digestion and that this unique form of matter employs the Living Algorithm to digest data streams.

To see why we were 'forced' to make the assumption that mental energy exists and that it is funneled/channeled through Attention, check out the next article in the stream – *Postulates 2*.