A Brief History of British Artificial Intelligence: From Alan Turing to DeepMind


[NetEase Intelligence, 16 October] In 1950 the celebrated British computer scientist Alan Turing published the paper "Computing Machinery and Intelligence", which laid extensive groundwork for both the development and the philosophy of artificial intelligence. Turing had worked out the principles of the modern computer in 1936 and played a key role in the codebreaking effort at Bletchley Park during World War II. In the 1950 paper he explored what is meant by "machine" and "thinking", and proposed what became known as the Turing Test: if a machine's conversation cannot be distinguished from a human's, the machine can be said to "think". His earlier work in computer science, communicated to the London Mathematical Society, proved that all digital computers are functionally equivalent: given enough memory and time, any computer can simulate the behaviour of any other. It is a powerful, elegant and precise idea, and the 1950 paper is still widely read, discussed, cited and anthologised.

Early researchers in artificial intelligence focused on developing the tools and techniques needed to explore Turing's ideas. The most promising early approach was symbolic programming, that is, giving programs the ability to manipulate expressions in their own programming language. Many special-purpose languages were written with this motivation. The most famous is the American language LISP, but the UK made important contributions too, such as POP-2 (invented by Robin Popplestone and Rod Burstall at the University of Edinburgh) and Edinburgh Prolog (invented by David Warren of the University of Edinburgh).

In 1952, Christopher Strachey wrote a draughts program for the Ferranti Mark 1 at the University of Manchester, and later a program that composed love letters. Playing games of ever-greater complexity has been a mark of AI's progress ever since.

Another former Bletchley Park codebreaker, Donald Michie, later became head of the Department of Machine Intelligence and Perception at Edinburgh. His noughts-and-crosses learning program MENACE was too demanding for the computers of the day, so he initially implemented it with about 300 matchboxes.
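Michie's matchbox scheme is simple enough to sketch in a few lines of Python. The following is an illustrative reconstruction, not Michie's original design, and the class and method names are invented for the example: each "matchbox" holds beads for the legal moves in one board position, a move is drawn at random in proportion to its bead count, and the beads are adjusted after each game.

```python
import random

class Menace:
    """Sketch of a MENACE-style matchbox learner (illustrative reconstruction)."""

    def __init__(self):
        self.boxes = {}    # board state -> {move: bead count}
        self.history = []  # (state, move) pairs played this game

    def choose(self, state, legal_moves):
        # Each "matchbox" starts with an equal number of beads per move;
        # a move is drawn with probability proportional to its beads.
        box = self.boxes.setdefault(state, {m: 3 for m in legal_moves})
        weighted = [m for m, n in box.items() for _ in range(n)]
        move = random.choice(weighted)
        self.history.append((state, move))
        return move

    def learn(self, reward):
        # After a win (reward +1) add a bead for each move played;
        # after a loss (reward -1) remove one, never emptying a box.
        for state, move in self.history:
            box = self.boxes[state]
            box[move] = max(1, box[move] + reward)
        self.history.clear()
```

Over many games, moves that lead to wins accumulate beads and are chosen more often, which is reinforcement learning carried out entirely with physical tokens.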

By the 1960s, artificial intelligence techniques were being applied to more complex problems and put to practical use. Early programs included strategies for solving problems step by step, such as automated reasoning and planning (work pioneered at Edinburgh by, among others, Alan Bundy).

Understanding natural language was another important strand. Karen Spärck Jones, for example, invented methods for retrieving information from documents, while Yorick Wilks's preference semantics was a computational method for resolving word-sense ambiguity that not only contributed to AI but also directly challenged the dominant Chomskyan paradigm in linguistics. Both were alumni of the Cambridge Language Research Unit, a legendary melting pot of computational linguistics founded by Wittgenstein's student Margaret Masterman.

In subsequent developments, robot systems such as Edinburgh's Freddy I and Freddy II combined vision, intelligence, versatility and physical engineering to accomplish tasks such as assembling objects (prompting the development of specialised AI languages for robotics). Artificial intelligence also influenced cognitive psychology: researchers including Richard Gregory, Christopher Longuet-Higgins, Philip Johnson-Laird and David Marr realised that human cognitive processes could be viewed as a form of computation and simulated as computer programs.

Both globally and in the United Kingdom, artificial intelligence has gone through periods of rapid progress and periods of relative stagnation (often called "AI winters"). A major event came in 1973, when Sir James Lighthill published a report on artificial intelligence recommending that AI funding be concentrated in a few UK universities. Lighthill doubted that the AI of the day could scale up to solve complex real-world problems, and indeed the mainstream methods of the 1960s modelled complex reasoning as a tree of possible decisions, an approach that quickly runs into combinatorial explosion.
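The combinatorial explosion Lighthill pointed to is easy to quantify: the number of positions a naive tree search must examine grows exponentially with depth. A minimal sketch, assuming a uniform branching factor:

```python
def tree_size(branching, depth):
    """Total nodes in a full game tree with the given branching factor and depth."""
    return sum(branching ** d for d in range(depth + 1))

# A modest game with 10 legal moves per position:
print(tree_size(10, 5))   # 111111 nodes - manageable
print(tree_size(10, 10))  # 11111111111 nodes - far beyond 1970s hardware
```

Doubling the search depth does not double the work; it squares the dominant term, which is why scaling up the 1960s methods failed.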

In the longer run, however, advances in symbolic programming led to a deeper understanding of how artificial intelligence can solve complex problems, in particular through tools and techniques that simulate or support expert reasoning in relatively well-structured domains (workplace applications being ideal).

Knowledge-based systems (KBS) combine artificial intelligence techniques with other kinds of computational reasoning and domain expertise to build systems for common but important real-world applications. The unglamorous but practical success of KBS helped dispel Lighthill's pessimism and paved the way for a large expansion of funding through the Alvey programme. In retrospect, the AI winters were a product of over-hype: supporters' exaggerated claims created a false impression of failure, leading people to underestimate the field's important if unspectacular achievements.

From 1983 to 1987, the British Alvey programme on Intelligent Knowledge-Based Systems (IKBS) ran in response to progress elsewhere, especially in Japan, whose Fifth Generation project relied on technology and languages from Britain, notably Edinburgh Prolog. Alvey built up academic research capacity and encouraged industry to focus on practical problems where progress was being made, especially natural language processing, interfaces and KBS.

These applications gradually shifted the goal of artificial intelligence away from building "thinking machines" (a notion that has always been philosophically contested) towards a more measurable one: building machines whose behaviour, had it been produced by a human, would be called "intelligent" (a notion implicit in the Turing Test). Such intelligent behaviour might be produced by "brute force" methods that neither reflect nor attempt to reflect human problem-solving. Fittingly, Britain has produced many important philosophers who helped untangle the concepts behind these distinctions, such as Margaret Boden and Andy Clark.

After the Alvey programme, investment in AI fell again, but the field's prospects improved as new programming methods emerged that no longer depended on chains of symbolic reasoning. Symbolic programs are the kind most readily expressed in human-language terms, but techniques modelled on nature can also infer a great deal from the perceived environment (for example, information from the senses), since they do not rely on direct representations of declarative or propositional knowledge.

One nature-inspired example is the genetic algorithm, which encodes a program as a set of "genes" and then modifies them in ways that mimic evolution, searching for a good "fit" to a changing environment (early projects included Richard Forsyth's BEAGLE system for pattern recognition). Another is the neural network, or connectionist system, in which artificial "neurons" are wired together in a system that behaves somewhat like a human brain, each neuron being excited or inhibited by others. As with symbolic AI, researchers often depart from faithful imitation of the human brain in order to improve performance (for example, the backpropagation algorithm developed by Geoffrey Hinton), though large-scale neural-network projects such as SpiNNaker, led by Steve Furber (2005–), continue the tradition of direct brain modelling. Other non-traditional computing approaches related to AI include parallel processing (many processors attacking a problem simultaneously), multi-agent systems (many intelligent autonomous agents interacting in an environment) and machine learning (algorithms that learn, through training, to find important structure in data and recognise interesting patterns).
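The gene-encoding idea can be illustrated with a toy genetic algorithm. This is a generic sketch, not Forsyth's BEAGLE: candidate solutions are bitstrings, fitness is closeness to a target pattern, and each generation keeps the fitter half of the population and refills it by crossover and mutation.

```python
import random

def evolve(target, pop_size=30, generations=200, mut_rate=0.1, seed=0):
    """Toy genetic algorithm: evolve bitstrings toward a target pattern."""
    rng = random.Random(seed)
    n = len(target)

    def fitness(genome):
        return sum(a == b for a, b in zip(genome, target))

    # Random initial population of bitstring "genomes".
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == n:          # perfect fit found
            break
        parents = pop[: pop_size // 2]    # truncation selection: keep fitter half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)     # one-point crossover
            child = a[:cut] + b[cut:]
            # Point mutation: flip each bit with probability mut_rate.
            child = [g ^ 1 if rng.random() < mut_rate else g for g in child]
            children.append(child)
        pop = parents + children
    pop.sort(key=fitness, reverse=True)
    return pop[0]

best = evolve([1, 0, 1, 1, 0, 1, 0, 1])
```

No individual is ever told the target; selection pressure alone pulls the population toward it, which is the essence of the evolutionary metaphor.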

Other countries and international companies are investing heavily in AI, but the UK is still regarded, at least for now, as a centre of excellence for AI research and applications. For example, the two founders of DeepMind met at the Gatsby Computational Neuroscience Unit at University College London, whose founding director was Geoffrey Hinton. Building on Turing's legacy and his successors, the United Kingdom can remain one of the important centres of artificial intelligence.

Dr Kieron O'Hara is an Associate Professor and Principal Investigator in the Department of Electronics and Computer Science, University of Southampton. (Source: compiled from an excerpt of a UK government report by the NetEase Intelligence platform; reviewed by Li Qing.)

