Archive for category AI

Goodness and AI

Under materialism, goodness is purely a human construct, and the universe itself is amoral and indifferent. The universe has no concept of good; only people possess this concept.

But, in order for Friendly AI to be possible, there must exist a “goodness function”: a piece of computer code which says, of any arbitrary configuration of atoms, how good it is. This goodness function must be the single correct goodness function. It must be a fact that the goodness of things is measured by this function. Otherwise it would be immoral to run an AI which used this function as its utility function.

Furthermore, according to my previous post, there is no analogous “truth function”: that is, a computable function which says what things are true. So we must maintain that there is no truth function, but there is a goodness function. This is counterintuitive.

Let us look at the picture of the world which we get under the theory that Friendly AI is possible. The universe has no concept of goodness; goodness is a purely human concept. But there is a single correct concept of goodness, which (unlike truth) can be measured by a computable function. Evolution coincidentally resulted in the existence of creatures whose nervous systems operated in such a way as to arrange matter into configurations that were good. The laws of physics simply happened to drive the matter into states that were good, for reasons having nothing to do with the fact that those states were good.

This is a little ridiculous. Now let us recall that under materialism, goodness is a human concept. It is a feature of our nervous systems, and a product of natural selection. It is unreasonable to expect there to be a single rigorous formulation of this concept; rather, we can expect that it has no coherent definition, and that what constitutes goodness varies from person to person. It is impossible to quantify goodness in a computable function, because goodness is not that kind of concept.

Goodness is not written into the universe; it is written into human nervous systems. And human nervous systems are not constructed in a mathematically perfect fashion which gives rise to one true morality with an unambiguous definition.

This in turn means that an AI could never do “the right thing,” because there is no such thing as “the right thing.” There are many mutually incompatible “right things.” We are proposing to build a computer program which can do anything, and the problem is getting it to do what we want. But “what we want” has no coherent definition. This means that no utility function can be the right utility function. Whatever the AI does, it will be wrong by some people’s standards.

Truth and AI

Let us consider an AI programmed to find new mathematical truths. Its utility function — call it T — says, of an arbitrary mathematical statement, whether or not it is a new mathematical truth. This involves two steps:

1. Saying whether or not the statement is true.
2. Saying whether or not the statement is new.

(2) is straightforward to implement. We simply need to maintain a database of known mathematical theorems, and check statements against this database. What about (1)?
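
For concreteness, here is a minimal Python sketch of what step (2) might look like. The string representation of statements, the whitespace-only normalization, and the function names are all illustrative assumptions; a real system would need a genuine canonical form for mathematical statements.

```python
# Minimal sketch of the novelty check (step 2). The string
# representation and the normalization are illustrative placeholders.

known_theorems = set()  # hypothetical database of already known theorems

def normalize(statement: str) -> str:
    # Placeholder canonicalization: collapse whitespace differences only.
    return " ".join(statement.split())

def is_new(statement: str) -> bool:
    """Return True if the statement is not already in the database."""
    return normalize(statement) not in known_theorems

def record(statement: str) -> None:
    """Add a newly established theorem to the database."""
    known_theorems.add(normalize(statement))
```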

The criterion for mathematical truth is the existence of a logically valid proof proceeding from axioms to the statement in question. So T will be based on this criterion.

But we can’t write T. This is because of Gödel’s incompleteness theorem. Gödel’s theorem implies that there is no consistent, computably enumerable set of axioms (strong enough to express arithmetic) that is sufficient to imply all mathematical truths. For every such set of axioms, there are statements that are true but not provable from those axioms.

In order to write T, we would need to specify the set of axioms that it is allowed to use. In other words, we would need to specify the set of true axioms — the set of axioms which implies all true theorems. But there is no such set.

We would not necessarily need to write out all of the axioms one by one and give them as constants in the computer program. But at a minimum, we would need to be able to write a function which could say, of an arbitrary mathematical statement, whether or not it was one of the true axioms. And we cannot do this, because there is no such thing as the set of true axioms.

We can write a utility function optimizing for “new mathematical theorems provable under axiom set Y.” For instance, perhaps Y = ZFC. But we cannot write a utility function optimizing for “new mathematical truths.” In other words, we cannot write T.

This has implications not only for mathematical truth in AI, but for truth in AI generally. Mathematical truth is part of truth in general. So we cannot write computer code that means “truth” without at some point writing computer code that means “mathematical truth.” And we just learned that that code is unwritable.

This means that we cannot write an AI which includes “truth” in its goal system. It is hard to imagine writing a Friendly AI which doesn’t have truth in its goal system. For instance, how do you write “make yourself more intelligent” without ever referring to truth? Can you define “intelligence” without any reference to truth?

Suppose that we bite the bullet, and build an AI which doesn’t have truth anywhere in its goal system. Instead of “mathematical truth,” its goal system contains “provability under ZFC.” This AI would not know any mathematical facts not provable under ZFC. Could it still go superintelligent? And a related question: would it ever be able to realize that it needed to know about something called “truth?”

Harmonic dissonance in 12-TET

I made this post in LaTeX, because it involves complicated math equations. Here it is.

Harmonic Consonance/Dissonance: Just Intonation

In this post I describe a metric for measuring harmonic consonance in the music AI. (Actually, the measure is of dissonance. We think of maximum consonance as zero dissonance.)

The assumption is that what we perceive as consonance and dissonance has simple mathematical roots. This is easy to believe. If one looks at the waveforms of consonant versus dissonant sounds, they differ in a consistent way. Furthermore, we know that the dissonant intervals have more complex ratios than the consonant intervals. All of this suggests the possibility of mathematically quantifying consonance and dissonance. I shall give an equation which I think does this.

That said, I do not think that this measure perfectly equals what humans perceive as consonance/dissonance. I think it lines up quite well; but I imagine that there are little quirks of our psychology which cause our perceptions to be at variance from the mathematical idealization of consonance/dissonance. In particular, I think that our social conventions surrounding music affect our perceptions of consonance/dissonance.

For instance, the model I give makes the diminished fifth significantly less dissonant than I would have expected it to be relative to the other intervals. I think that this is because Western harmony is constructed such that it almost never uses the diminished fifth, and so our social conventions depart from the mathematical idealization in this case.

Dissonance always involves “beating”: the interaction of frequencies which results in an irregular pattern of vibration. We can quantify the amount of beating by measuring the amount of time it takes for the pattern of vibration to repeat itself. We can measure this by taking the least common multiple of the wavelengths of the notes involved.

So suppose that there are n notes playing at a given moment, which have wavelengths w1, w2, …, wn. Then the dissonance D is, as a first approximation:

D = LCM(w1, w2, …, wn).

In order for this formula to work, the wavelengths must be rational numbers. We use just intonation for this purpose. The wavelengths across the piano keyboard are:

C0 = 1
Db0 = 15/16
D0 = 8/9
Eb0 = 5/6
E0 = 4/5
F0 = 3/4
Gb0 = 5/7
G0 = 2/3
Ab0 = 5/8
A0 = 3/5
Bb0 = 4/7
B0 = 8/15
C1 = 1/2
Db1 = 15/32
D1 = 4/9
Eb1 = 5/12
E1 = 2/5
F1 = 3/8
Gb1 = 5/14
G1 = 1/3
Ab1 = 5/16
A1 = 3/10
Bb1 = 2/7
B1 = 4/15
C2 = 1/4
etc.

This formula has the advantage that it results in lower notes contributing more dissonance than higher notes. This aligns with practical experience; a major third between C0 and E0 is much more dissonant than a major third between C5 and E5.
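
As a sanity check, here is a minimal Python sketch of this first approximation, using the fact that the LCM of two rationals a/b and c/d is lcm(a, c)/gcd(b, d). The function names are just for illustration; the wavelengths come from the table above.

```python
from fractions import Fraction
from math import gcd, lcm

def lcm_fractions(*ws: Fraction) -> Fraction:
    """LCM of rational wavelengths: lcm of numerators over gcd of denominators."""
    num = lcm(*(w.numerator for w in ws))
    den = gcd(*(w.denominator for w in ws))
    return Fraction(num, den)

def dissonance(*ws: Fraction) -> Fraction:
    """First approximation: D = LCM(w1, w2, ..., wn)."""
    return lcm_fractions(*ws)

# Major third C0-E0 versus the same interval five octaves up (C5-E5):
print(dissonance(Fraction(1), Fraction(4, 5)))            # 4
print(dissonance(Fraction(1, 32), Fraction(4, 5) / 32))   # 1/8
```

With the wavelengths above, this gives D = 4 for the C0-E0 third and D = 1/8 for the same third five octaves up, matching the claim that lower notes contribute more dissonance.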

The first problem with this formula is that it does not take account of the fact that different notes may have different volumes. This is simple enough to fix. Let v1, v2, …, vn be the volumes of the different notes. Then:

D = root(v1 * v2 * … * vn, n) * LCM(w1, w2, …, wn)

(To be clear, root(a, b) is the bth root of a.)
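
Building on the sketch above (and reusing lcm_fractions from it), the volume-weighted version might look like this; representing each sounding note as a (wavelength, volume) pair is my assumption.

```python
from math import prod

def dissonance_weighted(notes):
    """notes: list of (wavelength, volume) pairs.
    D = geometric mean of the volumes times the LCM of the wavelengths."""
    ws = [w for w, _ in notes]
    vs = [v for _, v in notes]
    geometric_mean = prod(vs) ** (1.0 / len(vs))
    return geometric_mean * float(lcm_fractions(*ws))
```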

There is still a problem with this formula, which is that it assumes that the instrument is tuned in just intonation. In fact it is tuned in 12-TET. This means that intervals such as C5-G5 are more dissonant than this formula predicts, and intervals such as C#5-F#5 are less dissonant. The latter type of problem is more serious, since with this formula, the AI would think that it could not safely use the interval C#5-F#5, whereas in fact it can.

And that is about where I am at. I don’t yet know how to translate this formula to work with 12-TET. It can’t be the same formula, because the least common multiple operation only works on rational numbers, and the wavelengths in 12-TET are irrational numbers. So my next task is to adapt this formula to 12-TET.

Concept Language

In this post I describe the concept language of the music AI. It is based on fuzzy first-order logic.

It has two data types: phrases, and floating point numbers.

Phrases are of two types: notes, and complex phrases. Complex phrases are phrases consisting of a set of sub-phrases.

Notes have three properties: pitch (0 to 87, for the 88 piano keys), velocity (0 to 127 — the MIDI velocity values), and duration (in ticks — some unit of time).

Complex phrases are collections of phrases; the order in which the sub-phrases are listed is irrelevant. Each sub-phrase is coupled with an offset value saying how many ticks after the start of the complex phrase it begins.

A truth value is a number between 0.0 (false) and 1.0 (true).
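
Here is a minimal sketch of these data types in Python. The class names, the integer fields, and the list-of-(offset, sub-phrase) representation are assumptions for illustration; the post does not fix a host language or a concrete representation.

```python
from dataclasses import dataclass, field

@dataclass
class Note:
    pitch: int      # 0 to 87, the 88 piano keys
    velocity: int   # 0 to 127, the MIDI velocity values
    duration: int   # in ticks

@dataclass
class ComplexPhrase:
    # Each entry pairs an offset (ticks from the start of the complex
    # phrase) with a sub-phrase (a Note or another ComplexPhrase).
    # The order of the list carries no meaning.
    sub_phrases: list = field(default_factory=list)

# A truth value is simply a float between 0.0 (false) and 1.0 (true).
```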

The following primitive constants are defined:
* Zero.
* The empty complex phrase (nil).

The following primitive functions are defined (and possibly others that prove useful):

* Successor, addition, subtraction, multiplication, division, round down.
* Note pitch, note velocity, note duration.
* Number of sub-phrases in a complex phrase.
* Duration in ticks of a complex phrase.
* Make note with a given pitch, velocity, and duration.
* Non-destructively add sub-phrase to complex phrase. (cons)

The following operations (from the definition of primitive recursive functions) can be used to make new functions:

* The projection functions.
* Composition.
* Primitive recursion.

The following primitive predicates are defined (and possibly others that prove useful):

* Is a number? Is an integer? Is a phrase? Is a note? Is a complex phrase?
* Less than, greater than, equals, less than or equal, greater than or equal.
* Number is in range of possible note pitches.
* Number is in range of possible note velocities.

The following operations can be used to make new predicates:

* Not P, P and Q, P or Q, P xor Q, P only if Q, P iff Q.
* Truth value of P is {equal to, less than, greater than, less or equal, greater or equal} N.
* Given a number 0.0 <= N <= 1.0, yield truth-value N.
* Universal and existential quantifiers over the members of a specified complex phrase. (The quantifier creates variables for both the sub-phrase and its offset.)
* Universal and existential quantifiers over a specified range of integers.

Predicates can take data parameters, which are substituted into the body of the predicate. Predicates can invoke other predicates, but there cannot be circular dependencies in the invocation graph; direct or indirect recursion is not allowed.

This language is primitive recursive, rather than general recursive. This means that it is impossible to write functions or predicates that fail to terminate.
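
To illustrate, here is a minimal sketch of the fuzzy connectives and the quantifiers over a complex phrase, using the common min/max interpretation and the (offset, sub-phrase) representation from the sketch above. The choice of interpretation and the function names are my assumptions, not something fixed by the design.

```python
def fuzzy_not(p: float) -> float:
    return 1.0 - p

def fuzzy_and(p: float, q: float) -> float:
    return min(p, q)

def fuzzy_or(p: float, q: float) -> float:
    return max(p, q)

def forall_subphrases(phrase, predicate) -> float:
    """Universal quantifier over the members of a complex phrase.
    `predicate(sub_phrase, offset)` returns a truth value in [0.0, 1.0]."""
    values = [predicate(sub, offset) for offset, sub in phrase.sub_phrases]
    return min(values, default=1.0)

def exists_subphrase(phrase, predicate) -> float:
    """Existential quantifier over the members of a complex phrase."""
    values = [predicate(sub, offset) for offset, sub in phrase.sub_phrases]
    return max(values, default=0.0)
```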

Music AI

I am thinking of writing an artificial intelligence to compose music. Here are some design ideas.

The idea is that he starts with nothing but an algorithm, generates music, and then learns from his own experiences to become better at generating music. He follows the same process that the history of music followed: semi-random experimentation guided by experience and evolving over time.

For simplicity, he will write piano pieces. There needs to be a representation for arbitrary-sized phrases of music, along the lines of MIDI.

There will be a “corpus” of already generated phrases which were judged to be good. The corpus is referenced during the process of composition, and changed by the process of composition.

There will be “concepts,” which describe properties of phrases. For instance, “major third” could be a concept; as could “accelerando.”

Concepts will be predicates in first-order fuzzy logic, which a given phrase can satisfy or not satisfy. There will be built-in primitive concepts (e.g., “C5,” “velocity 50”), and it will be possible for the AI to think of arbitrary concepts using a logical language.

There will be a “language,” which is the set of concepts that are applied in composition. The language evolves over time.

Phrases will be judged as good or bad according to a judgment metric. The metric which I am currently thinking of has four components: consonance, novelty, richness, and unity.

Consonance is a measure of the ground-level properties of music which make it pleasant or jarring as we perceive it. The most obvious component of this is harmonic (y-axis) consonance. I also want to define something like consonance for the rhythmic (x-axis) and melodic (x+y-axis) properties of music.

Novelty is a measure of how similar the phrase is to other phrases (of similar length) which exist in the corpus. Similarity is measured by conceptual closeness. Less similarity is better. The novelty of the sub-phrases is part of this measure.

Richness is a measure of how many concepts the phrase satisfies. More is better. The richness of the sub-phrases is part of this measure.

Unity is some measure of how well the sub-phrases fit together. I don’t know what that will look like.

So there is one fixed metric (consonance), and three metrics (novelty, richness, and unity) which are dependent upon the corpus and the language, and so evolve over time.
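
As a rough illustration, a judgment metric along these lines might combine the four components as below. The weighted-sum form, the weights, and the placeholder component functions are all assumptions; the details are deliberately left open above.

```python
# Placeholder component scores, each assumed to return a value in [0.0, 1.0].
def consonance(phrase) -> float: return 0.0
def novelty(phrase, corpus) -> float: return 0.0
def richness(phrase, language) -> float: return 0.0
def unity(phrase) -> float: return 0.0

def judge_phrase(phrase, corpus, language,
                 w_cons=1.0, w_nov=1.0, w_rich=1.0, w_unity=1.0) -> float:
    """Hypothetical weighted combination of the four components."""
    return (w_cons * consonance(phrase)
            + w_nov * novelty(phrase, corpus)
            + w_rich * richness(phrase, language)
            + w_unity * unity(phrase))
```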

He generates phrases by some as yet unknown means, drawing on the language and the corpus, and incorporating randomness. He judges the generated phrases, and puts the good ones into the corpus.

The corpus shall have a limited size, and periodically the worst phrases will be culled from it.

New concepts will also periodically be generated, by some as yet unknown means. The simplest thing would be to make random variations on the concepts in the language and judge them — i.e., a genetic algorithm.

Concepts also have a judgment metric. The language, like the corpus, has a limited size, and periodically has the worst concepts culled from it. The judgment metric I am thinking of right now has three components: simplicity, usefulness, and distinctness.

Simplicity is a measure of how many moving parts the concept has. If a concept incorporates other concepts into its definition, the complexity of those concepts is not part of the concept’s complexity.

Usefulness is a measure of how many times the concept appears in the corpus, and how many times another concept uses it in its definition. More is better.

Distinctness is a measure of how different the concept is from every other concept in the language. It is measured by statistical anti-correlation with other concepts in the corpus.

When he runs, the process will look like this. Generate phrases; update the corpus. Generate concepts; update the language. Repeat forever.
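
A structural sketch of that loop, with every mechanism that is still undecided passed in as a hypothetical callable and with illustrative limits:

```python
def run(corpus, language,
        generate_phrases, judge_phrase, generate_concepts, judge_concept,
        corpus_limit=1000, language_limit=200, threshold=0.5):
    """Top-level loop. All callables and limits are illustrative placeholders."""
    while True:
        # Generate phrases; keep the good ones; cull the worst from the corpus.
        for phrase in generate_phrases(corpus, language):
            if judge_phrase(phrase, corpus, language) >= threshold:
                corpus.append(phrase)
        corpus.sort(key=lambda p: judge_phrase(p, corpus, language), reverse=True)
        del corpus[corpus_limit:]

        # Generate concepts; keep the good ones; cull the worst from the language.
        for concept in generate_concepts(corpus, language):
            if judge_concept(concept, corpus, language) >= threshold:
                language.append(concept)
        language.sort(key=lambda c: judge_concept(c, corpus, language), reverse=True)
        del language[language_limit:]
```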
