This is a follow-up to last week’s post, **Notes on Regulation**. In today’s post, I am looking at Arvid Aulin-Ahmavaara’s extended form of the law of requisite variety (using Francis Heylighen’s version). As I have noted previously, Ross Ashby, the great mind and pioneer of Cybernetics, came up with the law of requisite variety (LRV). The law can be stated as **only variety can absorb variety**. Here variety is the number of possible states available to a system; it can be measured as statistical entropy. For example, a coin has a variety of two – Heads and Tails. Thus, if a user wants a way to randomly choose one of two outcomes, they can toss the coin. However, if the user has six choices, the coin cannot be used to efficiently choose one of six outcomes; in that case, a six-sided die, which has a variety of six, can be used. This is a simple illustration of variety absorbing variety.
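The correspondence between variety and entropy can be sketched in a few lines of Python (the helper below is my own illustration, not from the post): for a uniform distribution over n states, the entropy is log₂(n) bits.

```python
import math

def entropy_bits(probs):
    """Shannon entropy H = -sum(p * log2(p)), in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin has a variety of two states; a fair die, six.
coin = [1/2, 1/2]
die = [1/6] * 6

print(entropy_bits(coin))  # 1.0 bit
print(entropy_bits(die))   # ~2.585 bits, i.e. log2(6)
```

The die’s higher entropy is exactly what makes it able to absorb six choices where the coin, with only one bit of variety, cannot.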

The controller can find ways to amplify variety so as to still meet the external variety thrown upon the system. Let’s take the example of the coin and six choices again. The user can toss the coin three times (or toss three coins) and use the three coin-toss results to make a choice, since the variety of three coin-tosses is 2³ = 8. This is a means of amplifying variety in order to acquire requisite variety. From a cybernetics standpoint, the goal of regulation is to ensure that external disturbances do not reach the essential variables. The essential variables are those important for a system’s viability. If we take the example of an animal, some of the essential variables are blood pressure, body temperature, etc. The essential variables must be kept within a specific range to ensure that the animal continues to survive. The external disturbances are denoted by D, the essential variables by E, and the actions available to the regulator by A. As noted, variety is expressed as the statistical entropy of the variable. As Aulin-Ahmavaara notes – *If A is a variable of any kind, the entropy H(A) is a measure of its variety.*
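A minimal sketch of this amplification in Python (the function name and the rejection scheme are my own illustration): three tosses yield eight equally likely outcomes, which is more than enough variety to absorb the die’s six states, so the two surplus outcomes are simply rejected.

```python
import random

def die_from_coins():
    """Amplify a coin's variety to absorb a die's variety.

    Three tosses give 2**3 = 8 equally likely outcomes -- more than
    the 6 needed -- so the two surplus outcomes are rejected and the
    tosses repeated, leaving each die face with probability 1/6.
    """
    while True:
        n = sum(random.randint(0, 1) << i for i in range(3))  # 0..7
        if n < 6:
            return n + 1  # die faces 1..6
```

Rejection costs a few extra tosses on average, but it is what keeps the six outcomes exactly equally likely.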

With this background, we can state the extended form of the Law of Requisite Variety as:

**H(E) ≥ H(D) – H(A) + H(A|D) – B**

Each H term represents the statistical entropy of the corresponding variable. For example, H(E) is the statistical entropy of the essential variables. The larger the value of H, the more uncertainty there is around the variable. The goal for the controller is to keep H(E) as low as possible, since a larger entropy for the essential variables indicates a larger range of values for them. If the essential variables are not kept to a small range of values, the viability of the organism is compromised. We can now look at the other terms of the inequality and see how the value of H(E) can be kept low.
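As a purely illustrative calculation (the numbers in bits below are made up for this sketch), plugging values into the right-hand side of the inequality gives the smallest achievable H(E):

```python
# Hypothetical values in bits, chosen only to illustrate the bound.
H_D = 3.0          # variety of disturbances, H(D)
H_A = 2.5          # variety of compensatory actions, H(A)
H_A_given_D = 0.5  # lack of requisite knowledge, H(A|D)
B = 0.8            # buffering capability

# H(E) >= H(D) - H(A) + H(A|D) - B
H_E_bound = H_D - H_A + H_A_given_D - B
print(H_E_bound)  # 0.2 bits of unavoidable variation in E
```

Reading the signs off the code: raising H(A) or B, or lowering H(A|D), all push the bound on H(E) down, which is the whole game of regulation.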

Heylighen notes:

*This means that H(E) should preferably be kept as small as possible. In other words, any deviations from the ideal values must be efficiently suppressed by the control mechanism. The inequality expresses a lower bound for H(E): it cannot be smaller than the sum on the right-hand side. That means that if we want to make H(E) smaller, we must try to make the right-hand side of the inequality smaller. This side consists of four terms, expressing respectively the variety of disturbances H(D), the variety of compensatory actions H(A), the lack of requisite knowledge H(A|D) and the buffering capability B.*

As noted, D represents the external disturbances, and H(D) is the variety of the disturbances coming in. If H(D) is large, it generally raises the lower bound on H(E). Thus, an organism in a complex environment is more likely to face adversities that might drive the essential variables outside the safe range. For example, **you are less likely to die while sitting in your armchair than while trekking through the Amazonian rain forest or wandering through the concrete jungle of a megacity.** A good rule of thumb for survivability is to avoid environments that have a larger variety of disturbances.

The term H(A) represents the variety of actions available to counter the disturbances. The more variety you have in your actions, the more likely you are to be able to counteract the disturbances. *At least one of them will be able to solve the problem, escape the danger, or restore you to a safe, healthy state. Thus, the Amazonian jungle may not be so dangerous for an explorer having a gun to shoot dangerous animals, medicines to treat disease or snakebite, filters to purify water, and the physical condition to run fast or climb in trees if threatened. The term H(A) enters the inequality with a minus (–) sign, because a wider range of actions allows you to maintain a smaller range of deviations in the essential variables H(E).*

The term H(A|D) represents a conditional state. It is also called the lack of requisite knowledge. It has a plus sign since it indicates a “lack”. It is not enough that you have a wide range of actions, you have to know which action will be effective. If you have minimal knowledge, then your best strategy is to try out each action at random, and this is highly inefficient and ineffective if time is not on your side. *For example, there is little use in having a variety of antidotes for different types of snakebites if you do not know which snake bit you. H(A|D) expresses your uncertainty about performing an action A (e.g., taking a particular antidote) for a given disturbance D (e.g., being bitten by a particular snake). The larger your uncertainty, the larger the probability that you would choose a wrong action, and thus fail to reduce the deviation H(E). Therefore, this term has a “+” sign in the inequality: more uncertainty (= less knowledge) produces more potentially lethal variation in your essential variables.*
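The role of H(A|D) can be made concrete with a small, hypothetical joint distribution over disturbances and actions (the scenario names are my own), using the chain rule H(A|D) = H(A,D) – H(D):

```python
import math
from collections import defaultdict

def entropy(probs):
    """Shannon entropy in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def conditional_entropy(joint):
    """H(A|D) = H(A,D) - H(D) for a joint distribution {(d, a): p}."""
    p_d = defaultdict(float)
    for (d, _a), p in joint.items():
        p_d[d] += p
    return entropy(joint.values()) - entropy(p_d.values())

# Requisite knowledge: each disturbance maps to exactly one action.
perfect = {("snakebite", "antidote"): 0.5, ("infection", "medicine"): 0.5}

# No knowledge: either action is tried at random for either disturbance.
blind = {(d, a): 0.25 for d in ("snakebite", "infection")
                      for a in ("antidote", "medicine")}

print(conditional_entropy(perfect))  # 0.0 bits -- no lack of knowledge
print(conditional_entropy(blind))    # 1.0 bit  -- pure guessing
```

With perfect knowledge H(A|D) vanishes and drops out of the inequality; with blind guessing it adds a full bit to the lower bound on H(E).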

The final term B stands for buffering (passive regulation). *It expresses your amount of protective reserves or buffering capacity. Better even than applying the right antidote after a snake bite is to wear protective clothing thick enough to stop any snake poison from entering your blood stream. The term is negative because higher capacity means less deviation in the essential variables.*

*The law of requisite variety expresses in an abstract form what is needed for an organism to prevent or repair the damage caused by disturbances. If this regulation is insufficient, damage will accumulate, including damage to the regulation mechanisms themselves. This produces an acceleration in the accumulation of damage, because more damage implies less prevention or repair of further damage, and therefore a higher rate of additional damage.*

The optimal form of the Law of Requisite Variety occurs when the minimum value of H(E) is achieved and there is no lack of requisite knowledge, i.e., H(A|D) = 0. **The essence of regulation is that disturbances happen all the time, but that their effects are neutralized before they have irreparably damaged the organism.** This optimal result of regulation is represented as:

**H(E)min = H(D) – H(A) – B**

I encourage the reader to check out my previous posts on the LRV.

**Getting Out of the Dark Room – Staying Curious:**

Please maintain social distance and wear masks. Stay safe and always keep on learning…

In case you missed it, my last post was **Notes on Regulation**.

References:

[1] Cybernetic Principles of Aging and Rejuvenation: the buffering-challenging strategy for life extension – Francis Heylighen

[2] The Law of Requisite Hierarchy – A. Y. Aulin-Ahmavaara



Thanks. I’ve been looking for the connection between LRV and Bayes Theorem (https://en.wikipedia.org/wiki/Bayes%27_theorem) . You wrote: “The term H(A|D) represents a conditional state”. Using Bayes I get H(A|D) = H(D|A)*H(A)/H(D)

So H(E) >= H(D) – (1 – H(D|A)/H(D) )*H(A) – B.

H(D|A) being the likelihood of Action A happening given the disturbance D, means that one happens to do the correct(ive) action spontaneously (“no amount of planning can compensate for sheer luck”, as I learned in logistical planning).

Am I correct in assuming that H(D|A)/H(D) is being “absorbed” in B, the buffer, as your H(E)min = H(D) – H(A) – B suggests? The “requisite knowledge” has then been “absorbed” by B. Can I then see B also as a “Markov Blanket”. (https://en.wikipedia.org/wiki/Markov_blanket) or Markov-boundary? Implying that over time H(E) organizes it-self (or should I say “one organizes one-self”), learning to absorb its own correct(ive) actions.


Interesting thoughts! Thank you. The H represents the entropy value (summation) and thus I am not sure we can transpose to a probability equation directly. One form of “B” is certainly a thick boundary, which could be viewed as a Markov blanket (e.g. a cell’s boundary). But there are other types of passive regulation such as any governing constraints. Passive regulation applies at all times and does not make any selection with regards to what type of action needs to be taken.

Your comments give me more food for thought!

The idea of a Bayesian brain goes pretty well with cybernetics. It is something that the Free Energy Community has been looking at (Karl Friston, Anil Seth etc.)

Thank you,

Harish


You’re welcome. I’ve always considered a law like the Law of Requisite Variety as a “natural” law: it maintains itself; there’s no (need for) Ashby agents (in Dutch, an “agent” is also a policeman). When I encountered Ashby’s Law, I was studying Biophysics, as it nicely explained the “variety” we encounter in living organisms.

Requisite Variety works like a paradox; after all, …. varying, requiring varying, requiring … is an infinite re/progress and therefore the sure sign of a paradoxical pair. The (man made) fiction of “requisite variety” opposes both itself and reality, and is therefore – following Hans Vaihinger, The Philosophy of “As-If”- a useful concept.

And I think that the idea of a “regulator” “controlling” a system (body) describes just a fraction of the actual situation, as a consequence of using language to describe “cybernetics”. (y)Our theory or model-in-use, makes you “see” the facts. Almost everything organically organizes itself.

As you wrote that H represents Entropy (another paradoxical concept, “order out of chaos”) I assume it’s an equivalent of the Second Law of Thermodynamics, complexity disguises as entropy.

Brains – an organ with which you think you think – organize themselves, structurally coupled to the domain specified by their body and nervous system (Varela), structurally coupled to the domains specifying their existence, … . Brains consisting of structurally coupled self-contained, self structurally coupled areas of neurons, … . Their cycle times keeping them apart. (The word “time” has been derived from *dā-, Proto-Indo-European root meaning “to divide.”)

A Markov blanket would nicely explain the fact that our brain “hides” itself, with LRV required – by us – to explain both its variety and the way it “predicts” the future.
