
First, let us review our Physics with Newton's famous equation for Force: Force equals Mass times Acceleration (F = ma).

Next, let us review our equivalencies between Physics and Data Streams.

Now, let us apply our equivalencies to Newton's Force Equations.

What is the DS Acceleration? It is the change in velocity. We have equated the Data Stream Average with the velocity of the Stream. Hence the DS Acceleration would be the change in DS Averages. We derived the contextual equation for the change in DS Averages in Section 2C above. We called it the Difference of consecutive Means. It is shown below.

This reflects the observations of our pie diagrams. The change in the average equals the difference between the new piece of pie contributed by the new Data Bit and the old piece of pie that is removed from the old average, the residue, that is, the leftover piece of pie.
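Since the original equation image is not reproduced here, the Difference of consecutive Means can be sketched from the pie-diagram description above; the function names are ours, chosen for illustration.

```python
def mean_difference(old_avg, new_x, n):
    """Difference of consecutive Means for a stream that now holds n points:
    the new piece of pie (new_x / n) minus the residue removed from the
    old average (old_avg / n)."""
    return new_x / n - old_avg / n

def updated_mean(old_avg, new_x, n):
    """Equivalent incremental update of the average itself."""
    return old_avg + (new_x - old_avg) / n
```

For example, a stream averaging 3 over two points that receives a new Data Bit of 9 shifts its average by 9/3 - 3/3 = 2, to a new average of 5.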

Note again that because N grows larger and larger with each subsequent piece of Data, the DS Acceleration, and hence the Force, grows more stable with time. This is not desirable, for the same reasons mentioned above with Data Flow: it turns the measures of our Live Data Stream into something static. Static measures are not effective at describing Life. We need measures that are more sensitive. {This problem is dealt with effectively in the Notebook, Decaying Averages.}
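The Decaying Averages Notebook develops the remedy; one standard way to keep a stream measure sensitive is an exponentially decaying average, which gives recent data a fixed weight regardless of N. A minimal sketch, with a decay factor chosen purely for illustration:

```python
def decaying_average(old_avg, new_x, decay=0.1):
    """Exponentially decaying average: the newest Data Bit always carries
    weight `decay`, so the measure never goes static as N grows."""
    return (1 - decay) * old_avg + decay * new_x
```

With decay = 0.1, a new Data Bit of 10 arriving on an old average of 0 moves the average to 1.0, no matter how long the stream has been running.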

We have talked about the real and potential impact of data upon the Mean Average. Now we are going to introduce another useful concept, Proportional Impact. Before we do, let us discuss some of the limitations of Real Impact. The real impact of Data upon a DS Average is a scalar number that cannot easily be compared with the real impacts of other data streams.

A short conversation between two Santa Barbarans of the same age:

Andrianna, excitedly: "We just went to Los Angeles."

Marsha, disinterestedly: "So! We go there all the time."

Andrianna: "You don't understand. This is the first time I've ever been out of Santa Barbara in my life."

The data is the same for both Data Streams. Both individuals in the conversation have traveled 100 miles to Los Angeles. Because the Data is the same and the number of samples is the same (they are the same age), the Real Impact of their Data is the same. Obviously the responses of the two individuals were quite different. Why? Anyone can sense that Marsha had traveled a lot, while Andrianna had traveled very little. It doesn't take a nuclear scientist to see what's going on. Everyone knows the difference between 'the same old thing' and something 'brand new'.

We will describe the phenomenon in the language we have developed. Each lady has the same Range of Possibility, the World itself. However, their Realms of Probability were quite different. The Realm is determined by the Average and the Deviation. Their average location was presumably the same, Santa Barbara. Their average Deviation, however, was quite different. Andrianna's Realm of Probability was probably about 10 by 20 miles. Marsha's Realm of Probability could easily extend 300 miles in any direction if she's any kind of traveler. Thus Marsha's trip to Los Angeles is well within her Realm of Probability, while Andrianna's trip is far outside of hers. Crossing outside the Realm is exciting; staying within it is life as usual. It is easy to understand intuitively why this causes so much excitement for Andrianna and so little for Marsha. {We'll explore the technical reasons for this in the Boredom Principle Notebook.}

Our Data Streams' Averages and Ranges are the same, while their Deviations are drastically different. We introduce the term Proportional Impact to reflect the notion that the personal impact of new data upon the organism is related to the ratio of the Real Impact of the Data on the Old Average to its Deviation from the Mean. Hence on the intuitive level we might say, "Of course the trip to Los Angeles was more exciting for Andrianna; she's never been anywhere. That's a boring trip for Marsha. She frequently travels much farther than Los Angeles." This simple statement expresses the idea of Proportional Impact: the impact is measured relative to the Deviation. Andrianna has exceeded her Realm of Probability by 5 times, while for Marsha a trip to Los Angeles is just one third of the way to the border of her Realm. Hence in idealized terms the personal impact on Andrianna would be 15 times greater than on Marsha, since 5 ÷ (1/3) = 15. These words are just a way of introducing the concepts in a more familiar way, the way they were discovered.
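The arithmetic of the example can be checked directly. The figures below are the illustrative numbers from the text: a 100-mile trip measured against a roughly 20-mile Realm for Andrianna and a 300-mile Realm for Marsha.

```python
trip = 100.0              # miles from Santa Barbara to Los Angeles
andrianna_realm = 20.0    # rough extent of Andrianna's Realm of Probability
marsha_realm = 300.0      # rough extent of Marsha's Realm of Probability

# Proportional Impact: distance relative to each lady's own Deviation
andrianna_impact = trip / andrianna_realm   # 5.0: five times past her Realm
marsha_impact = trip / marsha_realm         # about 0.33: one third of the way
ratio = andrianna_impact / marsha_impact    # 15.0: fifteen times the impact
```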

Although in the preceding discussion we have referred to this type of impact as Proportional Impact, in the discussions that follow we will simply call it the Impact. This is the impact we will focus upon most in our empirical studies. In technical studies later on, potential and real impact will be studied, but they will be identified by their adjectives. The technical discussion now follows.

The measure of how far a Data Byte is from the Mean of a Number Set, in terms of Standard Deviations of the Set, is called its Z Score. In statistical language, the Z Score equals the ratio of the difference between the Data and the Average of the Set to the Standard Deviation of the Set. Because we are dealing with Live Data Streams, we will call the Z Score of the latest Data Byte its Impact upon the Stream. Therefore in Data Stream language, the Impact equals the ratio of the difference between the New Data Bit and the Old Average of the Data Stream to the Old Standard Deviation. Symbolically this is indicated as follows.

There is nothing new here from traditional Z Scores except the subscripts, which just reflect the growing nature of Data Streams. The Average and Standard Deviation describe the Old Data Stream. The Data Bit is the newest entry into the Box. The Impact measures how many Standard Deviations the new Data Byte lies from the Old Mean. {See the Decaying Averages Notebook, and also the Live & Dead Data Sets Notebook.}
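Since the symbolic equation image is not reproduced here, the Impact as defined in the text is simply the ordinary Z Score computed against the Old Average and Old Standard Deviation; the function name below is ours.

```python
def impact(new_x, old_avg, old_std):
    """Impact of the newest Data Byte upon the Stream:
    (New Data Bit - Old Average) / Old Standard Deviation,
    i.e. its Z Score against the Old Data Stream."""
    return (new_x - old_avg) / old_std
```

For example, a new Data Byte of 110 arriving on a stream whose Old Average is 100 with an Old Standard Deviation of 5 has an Impact of 2.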

Remember that our Data can be fluffy. Therefore our Data Stream can also be fluffy. Because of this fluffiness, individual changes are indeterminate, i.e. within random levels. We must look to the general Flow Density to determine change. If our Data Bytes consistently have an Impact of magnitude less than 3, then we are not sure that anything has really happened; transitory changes could easily be a matter of chance. If, however, the Impact of the newest Data Bytes is consistently over 3, then Change is probably occurring on some level, and the Flow Density will change. We need to see a pattern of change for the Flow Density to change, not just an individual aberration. (As an aside, these statements also apply to non-fluffy, Hard Data Sets: because their Standard Deviations are so close to zero, any variation beyond these limits would also indicate a change in momentum.)
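The "pattern of change, not an individual aberration" test can be sketched as follows. The window of three consecutive Data Bytes is an illustrative choice of ours; the original specifies only that the Impacts must be consistently over 3.

```python
def change_detected(impacts, threshold=3.0, run=3):
    """True only if the last `run` Impacts all exceed the threshold in
    magnitude: a sustained pattern of change, not a lone aberration."""
    recent = impacts[-run:]
    return len(recent) == run and all(abs(z) > threshold for z in recent)
```

A single large Impact in the midst of small ones is dismissed as chance, while a run of large Impacts signals that the Flow Density is changing.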