Machine learning & AI

The brain inspires a new type of artificial intelligence

Machine learning, introduced 70 years ago, is based on evidence of the dynamics of learning in the brain. Using the speed of modern computers and large datasets, deep learning algorithms have recently produced results comparable ...


Machine learning to optimize traffic and reduce pollution

Applying artificial intelligence to self-driving cars to smooth traffic, reduce fuel consumption, and improve air quality predictions may sound like the stuff of science fiction, but researchers at the Department of Energy's ...

Computer Sciences

A new machine learning strategy that could enhance computer vision

Researchers from the Universitat Autonoma de Barcelona, Carnegie Mellon University and International Institute of Information Technology, Hyderabad, India, have developed a technique that could allow deep learning algorithms ...


New algorithm repairs corrupted digital images in one step

From phone camera snapshots to lifesaving medical scans, digital images play an important role in the way humans communicate information. But digital images are subject to a range of imperfections such as blurriness, grainy ...


IBM peers into Numenta machine intelligence approach

Are we nowhere near the limits to which machines can make sense out of raw data? Some scientists would say that today's programmed computers cannot match a computer approach using biological learning principles for next ...



In mathematics, computing, linguistics, and related subjects, an algorithm is a finite sequence of instructions: an explicit, step-by-step procedure for solving a problem, often used for calculation and data processing. It is formally a type of effective method in which a list of well-defined instructions for completing a task will, when given an initial state, proceed through a well-defined series of successive states, eventually terminating in an end-state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as probabilistic algorithms, incorporate randomness.
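The deterministic/probabilistic distinction described above can be sketched in a few lines of Python. The function names below are illustrative, not taken from any particular source: `gcd` implements Euclid's classic deterministic algorithm, where each loop iteration is one well-defined state transition, while `approx_pi` is a simple Monte Carlo estimate whose transitions incorporate randomness.

```python
import random

def gcd(a, b):
    """Euclid's algorithm: a finite sequence of well-defined steps.
    Each iteration moves the computation from one state (a, b) to the
    next, terminating when b reaches 0 (the end-state)."""
    while b != 0:
        a, b = b, a % b
    return a

def approx_pi(samples=100_000):
    """A probabilistic algorithm: state transitions draw on randomness,
    so different runs may yield slightly different results."""
    inside = sum(
        1 for _ in range(samples)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4 * inside / samples

print(gcd(48, 18))   # deterministic: always 6
print(approx_pi())   # probabilistic: close to 3.14159, varies per run
```

Run twice, `gcd(48, 18)` always yields the same answer, whereas `approx_pi()` converges on π only in expectation, which is exactly the sense in which its state transitions are "not necessarily deterministic."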

A partial formalization of the concept began with attempts to solve the Entscheidungsproblem (the "decision problem") posed by David Hilbert in 1928. Subsequent formalizations were framed as attempts to define "effective calculability" (Kleene 1943:274) or "effective method" (Rosser 1939:225); those formalizations included the Gödel-Herbrand-Kleene recursive functions of 1930, 1934 and 1935, Alonzo Church's lambda calculus of 1936, Emil Post's "Formulation 1" of 1936, and Alan Turing's Turing machines of 1936–7 and 1939.

This text uses material from Wikipedia, licensed under CC BY-SA