Intelligence Modeling

It should be emphasized that within a system of interest (whether animal or machine), the sources and receivers characterized by information theory are the elements of a neural system that is itself a cybernetic entity. The brain is a cybernetic entity inasmuch as the information being processed governs the interaction between the sending and receiving parts. These parts then act together to produce the homeostatic or intentional observable behaviors of the neural system itself. In IM, this flow of information is given canonical form in the “cybernetic cycle” (Bergethon, 2009). Furthermore, these information flows are in fact models (or abstractions) of some sender state (i.e., the state of the external world or of some aspect of the internal milieu), so the formalisms of cybernetic and communication-system theories are a natural fit for neurological investigations.
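As a concrete illustration of the information-theoretic framing (a sketch, not part of the original formalism), the information carried by a sender's state distribution is its Shannon entropy; the state probabilities below are hypothetical:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)), in bits, of a source's state distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A hypothetical four-state sender (e.g., an internal-milieu variable)
# whose states the receiving part of the system must distinguish.
p_states = [0.5, 0.25, 0.125, 0.125]
print(shannon_entropy(p_states))  # 1.75 bits
```

The entropy quantifies how much uncertainty the receiver must resolve, which is the quantity the energy arguments in Step 2 below price in joules.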

The IM paradigm: the process of intelligence modeling proceeds in the following five steps.

Step 1: Describe, Graph and Map the behavior

  • Step 1a:  Describe the behavior of interest (systems description).
    • Observable properties are selected to represent the state of the system.
  • Step 1b: Draw a geometrical representation of the description.
    • The pattern of state transitions is mapped, often onto a topological surface. Such a surface (called a manifold) can suggest canonical equations whose coefficients can be assigned mechanistic meaning.
  • Step 1c: Write the canonical geometry (line, plane, sphere, step) in mathematical form. Ideally, this geometry can be treated as a potential energy surface or some other experimentally accessible variable (see Step 2).
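One minimal sketch of Steps 1a-1c, using entirely hypothetical data: observed residence frequencies of a one-dimensional observable are inverted (Boltzmann-style, an assumption of this sketch) into an empirical potential, and the simplest canonical form, a parabola, is fitted so that its coefficient can be assigned mechanistic meaning:

```python
import numpy as np

# Step 1a: hypothetical values of the selected observable
states = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
# Hypothetical residence frequencies: how often the system occupies each state
counts = np.array([5, 20, 50, 20, 5])

# Steps 1b/1c: treat -log(frequency) as an empirical potential and fit the
# canonical form U(x) = a*x**2 + c; a U that is linear in x**2 makes this a
# straight-line fit.
U_empirical = -np.log(counts / counts.sum())
a, c = np.polyfit(states**2, U_empirical, 1)
print(f"canonical form: U(x) = {a:.2f}*x^2 + {c:.2f}")
```

A positive fitted `a` would, in this toy setting, play the role of a restoring-force coefficient, i.e., a candidate for mechanistic interpretation in Step 2.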

Step 2: Propose plausible mechanisms that meet the requirements of the canonical equations or form

  • Choose constraints for the system and apply these to limit the solutions to the canonical equations. Constraint conventions limit the choices of how a particular problem can be solved. The problem is “two-way constrained”:
    • Bottom Up Constraints – this determination is semi-empirical and is derived from the actual system under study. This is a point at which IM deviates from general system theory, where the ideal is to find fundamental principles that derive solely from the system arrangement and are independent of the actual substrate.
      • Biological constraints, e.g., cells (morphology), anatomy (connectivity), and physiology (functional limitations such as conduction velocity)
      • Physicochemical constraints, e.g., energetics, kinetics, and dynamic behavior
        • A dominant and universal constraint applied in IM is the extraordinarily limited energy available to biological neural systems. This is a frank recognition of the role played by evolutionary forces in shaping the system. For example, the total energy budget of the human brain is 25 watts, of which only 12-13 watts is available for computational processes. A key idea is that information fidelity and choice are potentially costly in both energy and time. Biological information systems ultimately occupy states that minimize free energy (∆Gmin), with G = E – TS. G represents the balance between internal energy cost and physical entropy and can be written with respect to the Shannon entropy as well. Systems with high information fidelity occupy low physical-entropy states, and energy must therefore be consumed to achieve these states. Thus it is natural to seek mechanistic solutions that locate both 1) regions of maxima or minima in the potential field (points on the manifold) and 2) paths between information states that yield a minimum of Shannon entropy (and its energy equivalent) together with a maximal negative change in potential energy. This is quantified in the measure of “cybernetic action”. In general, IM will attempt to write the mechanism in terms that can be measured by state changes in the energy of the system under study.
    • Top Down Constraints – these are mostly consistent with the general system, cybernetic, and information theory constraints on how a system is assembled and behaves. For example, if a single set of data gives a two-valued observable output, this imposes certain restrictions on mechanistic solutions, because not all operations can deliver this solution.
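The ∆Gmin selection rule above can be sketched numerically. The candidate states, their energies, entropies, and the effective temperature below are all illustrative assumptions, not values from the text; only the selection criterion G = E – TS is the document's:

```python
def free_energy(E, T, S):
    """G = E - T*S: the balance the text describes between internal
    energy cost and physical entropy (illustrative, unitless)."""
    return E - T * S

# Hypothetical candidate information states: higher-fidelity states have
# lower entropy S but cost more internal energy E to maintain.
candidates = {
    "low_fidelity":  {"E": 1.0, "S": 0.9},
    "mid_fidelity":  {"E": 2.0, "S": 0.5},
    "high_fidelity": {"E": 4.0, "S": 0.1},
}

T = 2.0  # effective temperature weighting entropy against energy
best = min(candidates,
           key=lambda k: free_energy(candidates[k]["E"], T, candidates[k]["S"]))
print(best)  # the state the system occupies under the Delta-G-min constraint
```

With these particular numbers the low-fidelity state minimizes G; the point is only that which state the system occupies falls out of the ∆G balance, not the specific numbers.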

Step 3: Propose testable hypotheses for the causal/mechanistic model developed in Step 2. Because of the energy constraint in Step 2, experiments that reflect energy utilization (glucose, oxygen, or blood flow in cases of neurovascular coupling) can provide important insight for evaluating theoretical predictions.

Step 4: The mechanistic models developed are validated with experimental data. The experiments are inevitably reductionist, but for most “high-order” behaviors there will be strong systems interactions that complicate reductionist analysis. There is therefore a tension between the tendency to oversimplify the model at the experimental level and the theoretical “goodness” of the model. To strengthen the convergence between the observational and mechanistic models, two principles are applied:

  • The 2001 Constraint. This whimsical term refers to the computational psychosis suffered by the HAL series computer in Arthur C. Clarke’s 2001: A Space Odyssey.  A useful computational model, when actually constructed, should be able to demonstrate the same variety of pathologies that the human nervous system suffers.  In IM this is considered an essential test of the “goodness of a model”.
  • The Cognitive Correspondence Principle: Named after the inspiration of Niels Bohr in tying quantum physics to Newtonian physics, this principle recognizes the mathematical hierarchy required in modeling complex systems and the need to make simplifying assumptions. It states that each assumption must, in the limit, be found to be a valid truncation of a more precise mathematical description of the overall system.

Step 5: The final and most important step is the replication of the behavior of interest by building a device or machine version of the known mechanism that generates an emergent property equivalent to the original real behavior. Ideally a similar information-processing mechanism, operating under realistic constraints, is used. This step should be clearly differentiated from the “black box” approach of artificial intelligence workers.

Techniques used in the laboratory to parameterize the modeling equations include fMRI, cortical near-infrared spectroscopy, and electroencephalography in humans and animals; computer modeling and bionic simulation; and bioelectrochemical studies of model membranes. We have also used the IM approach to develop models of, and understand, science phobia and misconception, and to model the cognitive-emotional behaviors in autism and genius. IM derives from the quantitative modeling methodologies used successfully in the physical sciences, especially in thermodynamics and physical chemistry. We approach solving the intelligence modeling equations as a series of partial differential equations in terms of ∆G, which tightly connects the thermodynamics and energetics of cybernetic processing to potential solutions in the space of possibilities for the neural system.
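The idea of solving for states in terms of ∆G can be caricatured as gradient flow on a free-energy surface. The double-well G(x), the starting point, and the step size below are all hypothetical choices for this sketch, not the authors' solver:

```python
# Minimal sketch: gradient descent toward a Delta-G minimum on a
# hypothetical one-dimensional, double-well free-energy surface.
def G(x):
    return (x**2 - 1.0)**2  # minima (stable states) at x = -1 and x = +1

def dG(x):
    return 4.0 * x * (x**2 - 1.0)  # analytic gradient of G

x = 0.3  # initial state in the space of possibilities
for _ in range(200):
    x -= 0.05 * dG(x)  # follow -grad G downhill toward a free-energy minimum
print(round(x, 3))  # settles into one of the wells
```

Starting at x = 0.3 the trajectory descends into the well at x = +1; a start on the other side of the barrier would settle at x = -1, illustrating how the same ∆G surface supports multiple stable information states.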