Generic Considerations Associated with the Formation of Software Systems
Written by Administrator   
Friday, 26 September 2008 00:49

In deciding to make packages for Debian, we are relying on the following maxim:

Genus nunquam perit - Generic things do not perish

There are already around 22,000 Debian packages. This principle is stated to counter the naive tendency to complexify a system through nonstandard additions. The result is a nonstandard system which cannot easily be recreated: the complexity increases slightly, but at the cost of great disorder. What we seek are highly complex, highly ordered systems.

The advocated method is to cleanly identify a system and package it in a generic way. That is, to make additions to the set of applications, not modifications to the operating system.
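
As a minimal sketch of what such a generic addition looks like, the debian/control stanzas below describe a hypothetical, illustrative package (the name, maintainer, and description are invented, not an actual FRDCSA package):

    Source: frdcsa-example
    Section: misc
    Priority: optional
    Maintainer: Administrator <admin@example.org>
    Standards-Version: 3.8.0

    Package: frdcsa-example
    Architecture: all
    Depends: ${misc:Depends}
    Description: example of a generic, self-contained addition
     Installs everything under its own paths and modifies no files
     owned by other packages or by the base operating system.

The point is only that the addition enters the system as a first-class package: it can be recreated, removed, and audited like any other, rather than living as ad hoc edits to the installed system.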

The value of this method is that we can keep things very clean, that we conform to de facto et de jure standards (ok, ok, de facto and de jure), and that interfaces are implemented wholly and cleanly. There are other important properties of a system in addition to complexity, such as completeness, soundness, consistency, etc.

So, imagine that we propose a measure of a certain property, such as soundness. Let us further imagine that its range is a lattice wrt the ordering of cleanliness.
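
One hedged way to formalize the surviving fragment (the notation here is mine, not the article's): let $S$ be the set of systems under consideration and let $(L, \sqsubseteq)$ be a lattice ordered by cleanliness. A measure of soundness is then a map

    \[ \mu : S \to L \]

so that any two measured systems have a least upper bound $\mu(s_1) \sqcup \mu(s_2)$ and a greatest lower bound $\mu(s_1) \sqcap \mu(s_2)$ with respect to cleanliness.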
 
Artificial Intelligence
Written by Administrator   
Friday, 26 September 2008 00:19

The AI I work on has nothing to do with human intelligence, or trying to make a sentient being. If you make that assumption about me, that's like hearing that someone is a religious person and then chastising them for believing in Buddha (when in fact they might be Christian). There are many aspects to the field of AI. Probably the best way for you to think of this is that I work on IA, or intelligent agents. These are programs which are capable of solving problems that are considered useful by people.

AI concerns itself with minimizing unnecessary psychological suffering due to avoidable accidents. We use techniques from logic - the same techniques that, for instance, yield optimal chess-playing programs - treating the world as a game and trying to win that game by mathematical optimization and by proofs that bad things do not happen. You can do this: you can guarantee that certain failures will not occur, for reasons that are under your control. Of course no one can yet control everything, but at least you do the best you know how with what you have, all the time. For instance, you will not make bad moves at chess, and you will not fail to know the definition of a certain word.
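
To make the game-and-proof idea concrete, here is a minimal sketch, assuming a toy world (the states, moves, and bad set below are invented for illustration, not from any real system):

    # Toy safety proof by exhaustive search over a finite game tree.
    MOVES = {              # state -> successor states
        's0': ['s1', 's2'],
        's1': ['s3'],
        's2': ['s3', 'bad'],
        's3': [],
        'bad': [],
    }
    BAD = {'bad'}

    def exists_safe_policy(state, our_turn=True, seen=frozenset()):
        """True iff the agent can guarantee that BAD is never reached."""
        if state in BAD:
            return False
        if state in seen or not MOVES[state]:  # loop or terminal: safe
            return True
        seen = seen | {state}
        results = [exists_safe_policy(nxt, not our_turn, seen)
                   for nxt in MOVES[state]]
        # On our turn one safe choice suffices; on the world's turn the
        # proof must cover every possible move.
        return any(results) if our_turn else all(results)

    print(exists_safe_policy('s0'))  # True: choosing s0 -> s1 avoids 'bad'

Because the search is exhaustive, a True answer is a proof, in the formal sense, that the bad state cannot be reached under the found policy.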

Anyone who thinks AI is about taking control away from people does not realize that a person and an AI together will still face difficult problems. AI will make us better able to avoid certain common, unnecessary mistakes, and will help us mount higher challenges by providing rigour in situations where it was not previously available. It will not make us lazier; it will in fact make us harder working, because we will not be burned out, worn down, and helpless in the face of mistakes we know ought not to happen.

People are good at learning complex systems, but they sacrifice perfect memory in exchange for the other skills that such learning requires. Some problems therefore require a specialized solution, and that solution is the computer.

We can look at a world with AI in the same way we look at the current world, and we can introduce AI in such a way that no requirement any person may have is rendered invalid by the AI. That is the goal, actually: to satisfy everyone's requirements. We model requirements in the usual logical way.

A requirement is something that a person could express in English, and it is represented by association to the intended semantics of the statement (as opposed to misreadings of the requirement). The goal then is for the system to make as many of these requirements simultaneously true as it can, based on its laws (the requirements of all people) and its algorithm.
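
In the propositional case, making as many requirements simultaneously true as possible is the MaxSAT problem. A brute-force sketch, with invented atoms and requirements (a deployed system would use a real MaxSAT solver):

    from itertools import product

    ATOMS = ['warn_child', 'train_blocked', 'child_crosses']
    # A requirement holds if at least one (atom, wanted) literal matches.
    REQUIREMENTS = [
        {('warn_child', True)},                                # alert the child
        {('train_blocked', False), ('child_crosses', False)},  # no unsafe crossing
        {('child_crosses', True)},                             # the child's goal
    ]

    def satisfied(world, req):
        return any(world[atom] == wanted for atom, wanted in req)

    worlds = (dict(zip(ATOMS, bits))
              for bits in product([False, True], repeat=len(ATOMS)))
    best = max(worlds, key=lambda w: sum(satisfied(w, r) for r in REQUIREMENTS))
    print(best, sum(satisfied(best, r) for r in REQUIREMENTS))  # all 3 hold here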

algorithm n : a precise rule (or set of rules) specifying how to solve some problem [syn: {algorithmic rule}, {algorithmic program}]

When I say that AI solves problems, it is in the sense of problems of logic. However, many real-world problems may be solved by reducing them to problems of logic and then interpreting the solution.

Let me describe to you in perhaps the clearest way possible the goal of what I work on.

I realize that EVERY FAILED DEDUCTION LEADS TO FUTURE ACCIDENTS.

Let me describe an accident. It is by no means pleasant. So if the accident I am about to describe were avoidable, its prevention would make the world more pleasant.

This accident involves a young boy being hit by a train. I know that what I work on would be capable of preventing his death and the consequent psychological suffering of all who knew him.

While we cannot protect one hundred percent of people against one hundred percent of the hazards and vicissitudes of life, we can make relative improvements. We can, in a formal sense, achieve 100% safety in restricted systems, and of course the goal is to grow that safety to larger and larger systems. It is exactly the consistency of computers - which does not make them rigid thinkers, but which makes them conditional and highly reliable systems - that allows computers to solve, with 100% completeness, subclasses of existing problems. That is, to prevent whole categories of accidents.

The accident involving the boy follows a template that I have seen multiple times in videos of accidents, and have even experienced myself. In a different video, the exact same failure situation leads to the death of a gazelle.

This failure situation is one that I am now aware of, and it to some extent motivates my unswerving commitment to the completion of my task.

Since it will be easier to describe the gazelle scenario, I will start with that. The typical feature is that there are two animals, two deer in this case, and a threat which is perceived by one of the animals, which steps clear without thinking to alert the other. In this case, the deer are drinking from a water hole. The first deer blocks the second deer's view of the water hole. The first deer sees something approaching it (an alligator), and quietly turns around and leaves the water. From the point of view of the second deer, the movement of the first deer conceals the movement of the alligator, which proceeds to jump up and trap and kill the second deer.

I am very offended by the existence of such scenarios in nature. I think they should be stopped. I understand methods that would prevent this and a great many other classes of accidents.

Now we will describe the death of the boy, which closely parallels the death of the deer. Two young boys are walking towards a train track. There is a stationary train on the track which conceals an oncoming train. A train horn is blowing, and the boys fatally presume it is the stationary train. From the point of view of the camera, as the boys near the tracks we see that a second train, moving at about 80 MPH, is the real source of the horn. The boys are walking side by side; let us call the boy who is closer to the train the first boy. The first boy blocks the view of the second boy. When the train is within about 80 feet, the first boy perceives it and stops on the very edge of the track. He turns around and steps back automatically (clearly not thinking at all about the second boy), barely missing the train. The second boy is now about 4 feet in front of the first boy, and his head may be seen to hesitate for about a sixth of a second, a short moment of psychological confusion, before his hopelessly lost position is impacted by the approaching train. Needless to say, the child's body is instantly accelerated and rockets about three hundred feet in a second, actually hitting the cameraman at a velocity of roughly 75 MPH.

This death may have been prevented as a matter of logical completeness. All that is needed is a facility to alert the child to avoid that position. A deduction was necessary which would have triggered a communication that the child should stop, well in advance of the accident. I understand how to implement this, and how to set up the correspondences between the mind and the system.

It is in this spirit that I pursue the development of AI, a system which I know can prevent large classes of accidents. It is sufficient, to prevent an accident, to have a proof that it will not happen. Here we take proof in the formal mathematical sense. By establishing correspondences between the physical world and the simulated computer world (simulation occurring with knowledge-based systems, etc., which is more akin to a very precise, somewhat accurate human mental model than to, say, a simple physics simulation), correspondences that take place through input/output devices like mind-to-machine interfaces, cameras, etc., we will be able to eliminate large amounts of the psychological suffering that results from accidents preventable by these systems.

To give a simple account of a simple implementation, suppose that we have image understanding routines. <Show video of VSAM> The camera agent, an IA (intelligent agent) with image-understanding software, would be creating and forwarding live information to an AI - perhaps in the camera, perhaps at a nearby station, across a wireless ad hoc network, etc. The information would be complex but, in the spirit of the existing technology, would look roughly like a novel's worth of statements of the following form.

(#$approaching #$OurPoorFriendForWhomWeGrieve #$railtrack-39423)
(#$approaching #$OurUnfortunateSurvivor #$railtrack-39423)
(#$emittingSound #$train-32488)

Of course, there would be optimizations, etc, such as dialogs between the camera and the IA.

Now, when the camera reports the approaching train (or even before: there might be a system devoted to general-purpose collision avoidance which has already contacted the IA with information about the approaching train), the IA makes the inference that there is

(#$likelyEvent (#$and (#$dies #$OurPoorFriendForWhomWeGrieve) (#$collisionWith #$train-32488)))

I.e., it is able to prove this. <Show cyc knowledge base> In which case the IA contacts the agent present with the child. The agent advises the child to stop, 100 feet in advance of the approaching train. This child, if contacted in advance, would not have died.
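
As a hedged sketch of that inference step (the facts and the single rule below are simplified stand-ins for the CycL-style assertions above, not the Cyc API):

    # Minimal forward chaining over the camera's reported assertions.
    FACTS = {
        ('approaching', 'OurPoorFriendForWhomWeGrieve', 'railtrack-39423'),
        ('approaching', 'OurUnfortunateSurvivor', 'railtrack-39423'),
        ('approaching', 'train-32488', 'railtrack-39423'),
        ('emittingSound', 'train-32488'),
    }

    def infer(facts):
        """Rule: person and train approaching the same track -> advise stop."""
        derived = set(facts)
        approaching = [f for f in facts if f[0] == 'approaching']
        for _, who, place in approaching:
            for _, what, place2 in approaching:
                if place == place2 and what.startswith('train') \
                        and not who.startswith('train'):
                    derived.add(('likelyEvent', 'collisionWith', who, what))
                    derived.add(('advise', who, 'stop'))
        return derived

    for new_fact in sorted(infer(FACTS) - FACTS):
        print(new_fact)  # advises both children to stop, well in advance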

Here we have shown how this child's death might have been prevented using existing technology. Now, people might claim that this is a pathological singular point and that in reality we cannot prevent such things in all cases. Their argument is overly general, and for what? The fact is that protecting people from accidents is obviously highly important, and this is not a singular point: it is the rigour and completeness of the search procedures of logic that make this possible, and it does not rely on serendipity. It cannot prove everything, but it could have saved the lives of everyone I have personally known who has died, and it can do much more than that.

I am motivated, on account of the horrible experience of the similar way in which my beloved sister died and an intuitive understanding of the underlying technology, to pursue without hesitation or disruption the development of this technology, and I sincerely hope that people will join me in this effort. (Cruelly, she alone would have supported my work unconditionally.) I have obtained what I feel is the main necessary result: that we may consider, for the purposes of our system, the world as a large set of problems which need to be decided, but that for any program offered as a solution, there are always much larger fragments of the problems that are not solvable by that program. Therefore I believe that by collecting and distributing complex and intelligent software via the Debian GNU/Linux system, we will be rapidly reducing the number of unsolvable problems of logic, which, when they remain unsolved, are exactly the cause of every accident in the world and of a majority of psychological suffering.
 
A Solution to Weak Weak Artificial Intelligence
Written by Administrator   
Thursday, 25 September 2008 23:44

Artificial Intelligence may be controversial, but the limitations of computers have been spelled out explicitly for at least 70 years, if not thousands.

So in what sense do we take the term Artificial Intelligence? Well, we do not take it in the frame of reference that the reader may be accustomed to. The fact is that we interpret it very differently than it is standardly interpreted. I am not saying this to distinguish myself, but only because if I didn't, I would be blamed later; they would say, "Oh, well, that's not what everybody thinks Artificial Intelligence is".

But so far in this paper we haven't defined it with even one axiom. And yet, I still cannot be sure that anyone reading this really understands that, as far as we have defined it thus far, it is not conclusively disprovable that Artificial Intelligence is Mineral Spirits, that it is a position in government, or that it is a shade of blue.

Let's, for starters, look at a few hypothetical definitions of AI.

HYPO1) AI is a form of cat wrench.

Well, what does this mean? What is a "cat wrench"? Obviously the syntactic term "cat wrench" appears to be something in our taxonomy, and the phrase 'AI "is a form of" cat wrench' seems to imply that the class "cat wrench" subsumes "AI". But obviously none of these terms has been defined sufficiently.

Let us also stop referring to "AI" and restrict the term further to "weak weak AI".

HYPO2) "WWAI" is a major geopolitical event in the first part of the 20th century.

All I am doing here is trying to make the point that any term is subject to its definitions. I am trying to clear away the previously held definitions of WWAI for numerous reasons. It is because our present definition of WWAI is motivated by a set of assumptions which may be taken without much hesitation, followed by a string of formal inferences which proves conclusively that "Any computing device whatsoever is a WWAI."

A first objection is that this definition is overly broad. But realize that we are operating in a more Socratic way than most people are accustomed to in daily usage of language. It is altogether fitting and proper that we use our language differently in this case. If we did not make some different usage of the language, we would have difficulty making as much progress.

Socrates is a model of how we ought to look at things. For Socrates said, and this is my point in everything I have said up to now, that before we ask questions about a thing, we must ask the question "what is the essence of this thing?" So that is what I am asking: what is the essence of "WWAI"? I can be relatively sure, since "WWAI" is obviously not a term that anyone has used before, that no one knows anything about it, and that readers must therefore be willing to take note of my definitions.

Socrates would often take a conventional interpretation and apply a sequence of inferences, known as a proof, to disprove that conventional interpretation.

So without further ado I shall spell out all the assumptions and the inference which leads to WWAI.

Assumption 1:

(End of the original article; obviously, it was never completed.)

 
"Untangling the Truth" Analysis PDF Print E-mail
Written by Administrator   
Monday, 22 September 2008 22:36

A novel, simple, and effective method of untangling the truth from the lies we are told, easy enough for any intellectual to use.

Imagine that you have many ropes in a hopelessly tangled mess. Is this not the situation with the truth and the lies we are told?

How does one untangle this complicated mess?

LOGICAL FALLACIES

The first step is that one must be able to recognize when a fallacy is being employed to support an argument. Since the truth consists in knowing which assertions are correct and which are not, the proper methods of proving assertions are necessary. If someone argues unfairly - i.e., invokes a fallacy to support a step of their argument - you must be able to confute them. It follows that one must know how to recognize a fallacy. So study these. By our analogy, each time you recognize a fallacy, you have succeeded in undoing part of the knot.

GO

Believe it or not, the game of Go is necessary to our theoretical analysis. Why is this the case? Go is a game in which two players fight for control of territory. There is a large conceptual framework and an enormous literature on which techniques work. The major concepts in Go are cutting and connecting. You can strengthen weaker groups of stones by connecting them to stronger groups, and you can weaken them, and sometimes kill them, by cutting them off from stronger groups.

These two tools, and indeed many tools in Go, are very useful to the propagandist. Think of an argument as a group of stones. If you do not wish a person to believe that what the argument proves is true - in this case, any of the premises, intermediate assertions, and conclusions of the argument - you can kill the argument. You can do this by cutting it at the premises, cutting it at an intermediate step, or cutting it at the conclusion. Of course, logical fallacies are what one cuts otherwise valid arguments with. So the propagandist employs fallacies as his stones, and the student of truth employs truths as his stones, with assertions being the points of play and the rules of inference the edges of the board graph. If the game is a fight over other people's territory, i.e. mindshare, they can also employ fallacies.

There are many properties of the game of Go that also hold true of propaganda techniques. It follows that many of the conceptual tools from Go either have a direct analog or can be modified to function in counterpropaganda applications.
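
A minimal sketch of the analogy (the argument, its assertions, and its support structure are invented for illustration): an argument as an AND/OR graph whose conclusion survives only while some uncut line of support remains.

    # Each derived assertion lists the sets of assertions (one set per
    # applicable rule of inference) that can establish it.
    RULES = {
        'step1':      [{'premise1', 'premise2'}],   # needs both premises
        'step2':      [{'premise3'}],
        'conclusion': [{'step1'}, {'step2'}],       # two independent lines
    }
    PREMISES = {'premise1', 'premise2', 'premise3'}

    def alive(cut):
        """Every assertion still derivable after cutting the set `cut`."""
        live = set(PREMISES) - cut
        changed = True
        while changed:
            changed = False
            for assertion, ways in RULES.items():
                if assertion in cut or assertion in live:
                    continue
                if any(way <= live for way in ways):  # some rule still fires
                    live.add(assertion)
                    changed = True
        return live

    print('conclusion' in alive(set()))                  # True: argument stands
    print('conclusion' in alive({'premise1'}))           # True: step2 line holds
    print('conclusion' in alive({'premise1', 'step2'}))  # False: cut off

Cutting here is exactly the Go move: sever every chain connecting the conclusion to living premises, and the conclusion's group dies.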

PUTTING IT ALL TOGETHER

So you have two areas to study: Logical Fallacies and Go. It would also help to ensure familiarity with formal logic, topology, and graph theory, as these clarify the true properties of argumentation. Then, by identifying where stones have been placed to cut certain arguments, you can begin to gain control of larger territories of the truth.

EXERCISES

Develop a formal theory of the basic moves behind employing fallacies in propaganda/counter-propaganda applications. Write a generic graph-based visualizer for these moves which displays vertices/assertions and edges/rules-of-inference. Develop an NLP program, using any variety of methods, for recognizing and categorizing logical fallacies with a high degree of accuracy. Process texts using this tool and employ shape analysis from Go to visualize mindshare. From this visualization, assess the positional attributes using transformed Go positional-analysis techniques to develop assessment criteria for counter-propaganda strategies. Develop a feedback system that measures the real-world effectiveness of these strategies. Use these techniques to convince others to work with you in your fight to know the truth - a necessary condition for (true) freedom.
Last Updated ( Thursday, 25 September 2008 22:16 )
 
One Possible Problem with Seed-AI
Written by Administrator   
Thursday, 25 September 2008 22:01

The FRDCSA is a project which works towards the creation of AI through practical methods. AI is the theory of building intelligently behaving artificial systems. Strong AI (SAI) holds that human intelligence is Turing-equivalent and that therefore computer programs which fully model human intelligence are possible. Weak AI (WAI), on the other hand, asserts that humans are not Turing-equivalent and that therefore we may concern ourselves with writing increasingly intelligent programs, but these must fall short of human intelligence.

We take the reasonable view that both SAI and WAI contain an unproven assertion, namely, the one which decides the question of human/computer equivalence. Therefore, we consider a restricted theory of AI we designate AI-, the intersection of the axioms of SAI and WAI.

Reasoning in this manner, we attempt to create intelligent programs. The most obvious approach to bootstrapping AI is the concept of seed AI. This is the idea that the AI researcher need only write a simple program which, by virtue of unprecedented historical circumstances and the author's own superior design, is able to grow increasingly intelligent.

While possible in and of itself, the idea relies on certain environmental assumptions which are not known to hold, and therefore it does not succeed with certainty. To see the flaw in the crude argument for seed AI, it is best to consider the restatement: "I am going to write a program that writes a smarter program."

When this idea is formalized, it is equivalent to the following: I am going to write a program A that writes another program B that solves problems (i.e., proves theorems) that A cannot.

This, however, is what is called a transitive closure violation. Suppose A cannot prove Phi, yet A writes B and B proves Phi. Then A proves Phi (transitively). Contradiction.

While this proof sketch is not convincing outside of the proper setting of recursion theory, hopefully it conveys the idea. In that setting, it is evident that B is only part of the intermediate execution of A.
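
A hedged formalization of that composition step (the notation is mine, not the article's):

    Let $A$ be a program that halts and outputs a program $B$, and suppose
    $B \vdash \varphi$. Define the composite program
    \[
      A' = \text{run } A,\ \text{take its output } B,\ \text{then run } B.
    \]
    Then $A' \vdash \varphi$, and $A'$ is constructible from $A$ alone, so
    $\varphi$ already lies in the transitive closure of what $A$ can prove.
    The claim that $B$ proves what $A$ cannot fails as soon as writing and
    running $B$ is counted as part of $A$'s own execution.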

Most new AI researchers, who understand the goal but not the means, will attempt to continue to justify the concept. They will resort to positing an oracle O which provides information to A, claiming that A is then not limited in this way. But this idea is simply a relativization of the same fundamental error: one has only to bound the size of the oracle input and then to demonstrate that the totality of any additional problems solved must be supplied by the environment.
Last Updated ( Thursday, 25 September 2008 22:06 )
 