Under materialism, goodness is purely a human construct, and the universe itself is amoral and indifferent. The universe has no concept of good; only people possess this concept.
But in order for Friendly AI to be possible, there must exist a “goodness function”: a piece of computer code which says, of any arbitrary configuration of atoms, how good it is. This goodness function must be the single correct goodness function: it must be a fact that the goodness of things is measured by this function. Otherwise it would be immoral to run an AI which used this function as its utility function.
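To make the posited object concrete: a goodness function of this kind would be a total, computable map from world-states to scores, which the AI then maximizes. The following is a purely illustrative sketch of mine, not anything from the AI literature; the state representation and the scoring rule are arbitrary stand-ins.

```python
# Hypothetical sketch of a "goodness function": a total, computable map
# from world-states to real-valued scores. Every name here is an
# illustrative assumption, not a real API.

def goodness(state: tuple) -> float:
    """Assign a goodness score to an arbitrary configuration of atoms.

    A 'state' is toy-modeled as a tuple of numbers. The claim under
    examination is that some such function is the single correct one.
    """
    # Arbitrary stand-in rule: prefer states whose values are near zero.
    return -sum(x * x for x in state)

def choose(states):
    """An agent using the goodness function as its utility function."""
    return max(states, key=goodness)

print(choose([(3, 4), (0, 1), (2, 2)]))  # → (0, 1)
```

The Friendly AI thesis is then the claim that some function with this shape is not merely one scoring rule among many, but *the* correct one.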
Furthermore, according to my previous post, there is no analogous “truth function”: that is, no computable function which says which things are true. So we must maintain that there is no truth function, but that there is a goodness function. This is counterintuitive.
Let us look at the picture of the world which we get under the theory that Friendly AI is possible. The universe has no concept of goodness; goodness is a purely human concept. But there is a single correct concept of goodness, which (unlike truth) can be measured by a computable function. Evolution coincidentally resulted in the existence of creatures whose nervous systems operated in such a way as to arrange matter into configurations that were good. The laws of physics simply happened to drive the matter into states that were good, for reasons having nothing to do with the fact that those states were good.
This is a little ridiculous. Now let us recall that under materialism, goodness is a human concept. It is a feature of our nervous systems, and a product of natural selection. It is unreasonable to expect there to be a single rigorous formulation of this concept; rather, we can expect that it has no coherent definition, and that what constitutes goodness varies from person to person. It is impossible to quantify goodness in a computable function, because goodness is not that kind of concept.
Goodness is not written into the universe; it is written into human nervous systems. And human nervous systems are not constructed in a mathematically perfect fashion which gives rise to one true morality with an unambiguous definition.
This in turn means that an AI could never do “the right thing,” because there is no such thing as “the right thing.” There are many mutually incompatible “right things.” We are proposing to build a computer program which can do anything, and the problem is getting it to do what we want. But “what we want” has no coherent definition. This means that no utility function can be the right utility function. Whatever the AI does, it will be wrong by some people’s lights.
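The closing point can be put in miniature. In this toy construction of my own, two people carry different goodness functions over the same two states; any single utility function the AI maximizes must then contradict at least one of them:

```python
# Toy illustration (my own construction): two people whose goodness
# functions conflict, so no one utility function agrees with both.

states = ["A", "B"]

def alice_goodness(s):
    # Alice's nervous system rates state A as good.
    return 1.0 if s == "A" else 0.0

def bob_goodness(s):
    # Bob's nervous system rates state B as good.
    return 1.0 if s == "B" else 0.0

def best(goodness):
    """The state an agent with this goodness function would bring about."""
    return max(states, key=goodness)

print(best(alice_goodness))  # → A
print(best(bob_goodness))    # → B

# Whichever state the AI produces, one of the two calls it wrong.
assert best(alice_goodness) != best(bob_goodness)
```

Nothing in the example turns on its small scale: with billions of people and astronomically many states, the disagreements only multiply.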