KORFEZ

Thursday, 13 September 2007

Artificial intelligence (Yapay Zeka)
The modern definition of artificial intelligence (or AI) is "the study and design of intelligent agents", where an intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. John McCarthy, who coined the term in 1956, defines it as "the science and engineering of making intelligent machines." Other names for the field have been proposed, such as computational intelligence, synthetic intelligence or computational rationality. The term artificial intelligence is also used to describe a property of machines or programs: the intelligence that the system demonstrates.
AI research uses tools and insights from many fields, including computer science, psychology, philosophy, neuroscience, cognitive science, linguistics, operations research, economics, control theory, probability, optimization and logic. AI research overlaps with tasks such as robotics, control systems, scheduling, data mining, logistics, speech recognition, facial recognition and many others.

Mechanisms
Generally speaking, AI systems are built around automated inference engines, including forward reasoning and backward reasoning. Based on certain conditions ("if"), the system infers certain consequences ("then"). AI applications are generally divided into two types, in terms of consequences: classifiers ("if shiny then diamond") and controllers ("if shiny then pick up"). Controllers do, however, also classify conditions before inferring actions, and therefore classification forms a central part of most AI systems.
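The if-then inference described above can be sketched as a tiny forward-chaining loop. Both the rules and the starting fact below are invented for illustration; the classifier rule assigns a category, while the controller rule triggers an action.

```python
# A minimal sketch of forward ("if-then") inference over toy rules.

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be inferred."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, consequence in rules:
            if condition in facts and consequence not in facts:
                facts.add(consequence)
                changed = True
    return facts

rules = [
    ("shiny", "diamond"),    # classifier: "if shiny then diamond"
    ("diamond", "pick up"),  # controller: the inferred class triggers an action
]

print(sorted(forward_chain({"shiny"}, rules)))  # ['diamond', 'pick up', 'shiny']
```

Note how the controller rule fires only after the classifier rule has added "diamond" to the known facts, mirroring the point that controllers classify before acting.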
Classifiers make use of pattern recognition for condition matching. In many cases this does not imply absolute, but rather the closest match. Techniques to achieve this divide roughly into two schools of thought: Conventional AI and Computational intelligence (CI).
Conventional AI research focuses on attempts to mimic human intelligence through symbol manipulation and symbolically structured knowledge bases. This approach limits the situations to which conventional AI can be applied. Lotfi Zadeh stated that "we are also in possession of computational tools which are far more effective in the conception and design of intelligent systems than the predicate-logic-based methods which form the core of traditional AI." These techniques, which include fuzzy logic, have become known as soft computing. These often biologically inspired methods stand in contrast to conventional AI and compensate for the shortcomings of symbolicism. These two methodologies have also been labeled as neats vs. scruffies, with neats emphasizing the use of logic and formal representation of knowledge while scruffies take an application-oriented, heuristic, bottom-up approach.

Classifiers
Classifiers are functions that can be tuned according to examples, making them very attractive for use in AI. These examples are known as observations or patterns. In supervised learning, each pattern belongs to a certain predefined class. A class can be seen as a decision that has to be made. All the observations combined with their class labels are known as a data set.
When a new observation is received, that observation is classified based on previous experience. A classifier can be trained in various ways; there are mainly statistical and machine learning approaches.
A wide range of classifiers is available, each with its strengths and weaknesses. Classifier performance depends greatly on the characteristics of the data to be classified. There is no single classifier that works best on all given problems; this is also referred to as the "no free lunch" theorem. Various empirical tests have been performed to compare classifier performance and to find the characteristics of data that determine classifier performance. Determining a suitable classifier for a given problem is, however, still more an art than a science.
The most widely used classifiers are the neural network, support vector machine, k-nearest neighbor algorithm, Gaussian mixture model, naive Bayes classifier, and decision tree.
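As an illustration of one entry in that list, here is a minimal k-nearest-neighbour classifier: a new observation is assigned the majority class among its k closest labeled observations. The data set below (two features per observation, two classes) is invented for the example.

```python
# A minimal k-nearest-neighbour classifier sketch over a toy data set.
from collections import Counter
import math

def knn_classify(dataset, query, k=3):
    """Classify `query` by majority vote among its k closest observations."""
    distances = sorted(
        (math.dist(features, query), label) for features, label in dataset
    )
    votes = Counter(label for _, label in distances[:k])
    return votes.most_common(1)[0][0]

# Observations combined with their class labels form the data set.
dataset = [
    ((1.0, 1.0), "A"), ((1.2, 0.8), "A"), ((0.9, 1.1), "A"),
    ((5.0, 5.0), "B"), ((5.2, 4.8), "B"), ((4.9, 5.1), "B"),
]

print(knn_classify(dataset, (1.1, 1.0)))  # A
```

This also shows why classifier choice is data-dependent: k-NN works well when nearby observations share a class, but degrades when features are on very different scales or the data is high-dimensional.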

Conventional AI
Conventional AI mostly involves methods now classified as machine learning, characterized by formalism and statistical analysis. This is also known as symbolic AI, logical AI, neat AI and Good Old Fashioned Artificial Intelligence (GOFAI). (Also see semantics.) Methods include:
Expert systems: apply reasoning capabilities to reach a conclusion. An expert system can process large amounts of known information and provide conclusions based on them.
Case-based reasoning: stores a set of problems and answers in an organized data structure called cases. When presented with a problem, a case-based reasoning system finds the case in its knowledge base most closely related to the new problem and presents its solutions as output, with suitable modifications.
Bayesian networks
Behavior based AI: a modular method of building AI systems by hand.
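The case-based reasoning idea described above can be sketched as a retrieval step over stored cases. The case base of (problem features, solution) pairs below is invented for illustration; similarity here is simply the number of shared features, a deliberately crude stand-in for real similarity measures.

```python
# A sketch of case-based retrieval over a toy case base.

def retrieve(case_base, new_problem):
    """Return the stored case whose problem overlaps most with the new one."""
    def similarity(case):
        problem, _ = case
        return len(problem & new_problem)  # count of shared features
    return max(case_base, key=similarity)

case_base = [
    ({"engine won't start", "lights dim"}, "charge or replace battery"),
    ({"engine won't start", "fuel gauge empty"}, "refuel the tank"),
]

problem = {"engine won't start", "lights dim", "cold morning"}
matched, solution = retrieve(case_base, problem)
print(solution)  # charge or replace battery
```

A full case-based reasoning system would also adapt the retrieved solution to the new problem and store the outcome as a new case; only the retrieval step is shown here.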

Computational intelligence
Computational intelligence involves iterative development or learning (e.g., parameter tuning in connectionist systems). Learning is based on empirical data and is associated with non-symbolic AI, scruffy AI and soft computing. Subjects in computational intelligence as defined by IEEE Computational Intelligence Society mainly include:
Neural networks: trainable systems with very strong pattern recognition capabilities.
Fuzzy systems: techniques for reasoning under uncertainty that have been widely used in modern industrial and consumer product control systems; they are capable of working with concepts such as 'hot', 'cold', 'warm' and 'boiling'.
Evolutionary computation: applies biologically inspired concepts such as populations, mutation and survival of the fittest to generate increasingly better solutions to the problem. These methods most notably divide into evolutionary algorithms (e.g., genetic algorithms) and swarm intelligence (e.g., ant algorithms).
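The evolutionary-computation concepts just listed (populations, mutation, survival of the fittest) can be sketched as a toy genetic algorithm that evolves bit strings toward all-ones. Every parameter below (string length, population size, mutation rate, generation count) is an illustrative choice, not a standard setting.

```python
# A toy genetic algorithm: evolve 12-bit strings toward all-ones.
import random

TARGET_LEN = 12

def fitness(individual):
    return sum(individual)  # number of 1-bits; higher is better

def mutate(individual, rate=0.05):
    # Flip each bit independently with probability `rate`.
    return [bit ^ (random.random() < rate) for bit in individual]

def evolve(generations=200, pop_size=30):
    random.seed(0)  # fixed seed so the run is repeatable
    population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Survival of the fittest: keep the better half unchanged...
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # ...and refill the population with mutated copies of survivors.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # typically reaches or nears TARGET_LEN
```

Because the fittest survivors are carried over unchanged, the best fitness never decreases between generations; mutation supplies the variation from which better solutions are selected.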
With hybrid intelligent systems, attempts are made to combine these two groups. Expert inference rules can be generated through neural networks, or production rules from statistical learning, as in ACT-R or CLARION. It is thought that the human brain uses multiple techniques to both formulate and cross-check results. Thus, systems integration is seen as promising and perhaps necessary for true AI, especially the integration of symbolic and connectionist models (e.g., as advocated by Ron Sun).

AI programming languages and styles
AI research has led to many advances in programming languages including the first list processing language by Allen Newell et al., Lisp dialects, Planner, Actors, the Scientific Community Metaphor, production systems, and rule-based languages.
GOFAI research is often done in programming languages such as Prolog or Lisp. Matlab and Lush (a numerical dialect of Lisp) include many specialist probabilistic libraries for Bayesian systems. AI research often emphasises rapid development and prototyping, using such interpreted languages to empower rapid command-line testing and experimentation. Real-time systems are, however, likely to require dedicated optimized software.
Many expert systems are organized collections of such if-then statements, called productions. These can include stochastic elements, producing intrinsic variation, or rely on variation produced in response to a dynamic environment.
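A stochastic element of the kind mentioned above can be as simple as choosing at random among the productions whose conditions match. The productions below are invented for illustration; a real expert system would use a far richer rule representation.

```python
# A sketch of a production system with a stochastic conflict-resolution step.
import random

productions = [
    ("hungry", "eat"),
    ("hungry", "forage"),
    ("tired", "sleep"),
]

def act(facts, rng=random):
    """Fire one matching production, chosen at random among the candidates."""
    candidates = [action for condition, action in productions
                  if condition in facts]
    return rng.choice(candidates) if candidates else None

print(act({"hungry"}))  # either "eat" or "forage"
```

With a single matching production the choice is deterministic; with several, the random selection produces the intrinsic variation in behavior described above.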

Americas

The Americas are the lands of the Western hemisphere or New World consisting of the continents of North America and South America with their associated islands and regions. The Americas cover 8.3% of the Earth's total surface area (28.4% of its land area) and contain about 14% of the human population (about 900 million people). The Americas may alternatively be referred to as America; however, America may be ambiguous as it can refer to either this entire landmass or just the United States of America.


History
History of the Americas

Formation
South America broke off from western Gondwanaland around 135 million years ago, forming its own continent. Starting around 15 million years ago, the collision of the Caribbean Plate and the Pacific Plate resulted in a series of volcanoes along the border that created a number of islands. The gaps in the Central American archipelago filled in with material eroded off North America and South America, plus new land created by continued volcanism. By 3 million years ago, the continents of North America and South America were linked by the Isthmus of Panama, thereby forming the single landmass of the Americas.

Settlement
See also: Pre-Columbian trans-oceanic contact
Archaeological finds establish the widespread presence of the Clovis culture in North America and South America around 10000 BCE. Whether this is the first migration of humans into North America and South America is disputed, with alternative theories holding that humans arrived in the Americas as early as 40000 BCE.
The Inuit migrated into the Arctic section of North America in another wave of migration, arriving around 1000 CE. Around the same time, Viking settlers began arriving in Greenland in 982 and Vinland shortly thereafter. The Viking settlers quickly abandoned Vinland, and had disappeared from Greenland by 1500.
Large scale European colonization of the Americas began shortly after the voyages of Christopher Columbus in 1492. The spread of new diseases brought by Europeans and Africans killed most of the inhabitants of North America and South America, with a general population crash of Native Americans occurring in the mid sixteenth century, often well ahead of European contact. Native peoples and European colonizers came into widespread conflict, resulting in what some scholars have labelled a genocide of the natives. Early European immigrants were often part of state-sponsored attempts to found colonies in the Americas. Migration continued as people moved to the Americas fleeing religious persecution or seeking economic opportunities. Many individuals were forcibly transported to the Americas as slaves, prisoners or indentured servants.

Naming
World map of Waldseemüller, which first named America (in the map, over Paraguay). Germany, 1507.
The earliest known use of the name America for this particular landmass dates from April 25, 1507. It appears on a globe and a large map created by the German cartographer Martin Waldseemüller in Saint-Dié-des-Vosges. An accompanying book, Cosmographiae Introductio, explains that the name was derived from the Latinized version of the explorer Amerigo Vespucci's name, Americus Vespucius, in its feminine form, America, as the other continents all have Latin feminine names.
Vespucci's role in the naming issue, like his exploratory activity, is unclear. Some sources say that he was unaware of the widespread use of his name to refer to the new landmass. Christopher Columbus, who had first brought the region's existence to the attention of Renaissance era voyagers, had died in 1506 (believing, to the end, that he had reached part of Asia) and could not protest Waldseemüller's decision.

Map of America by Jonghe, c. 1770.
A few alternative theories regarding the landmass' naming have been proposed, but none of them has achieved any widespread acceptance.
One alternative, first advanced by Jules Marcou in 1875 and later recounted by novelist Jan Carew, is that the name America derives from the district of Amerrique in Nicaragua. The gold-rich district of Amerrique was purportedly visited by both Vespucci and Columbus, for whom the name became synonymous with gold. According to Marcou, Vespucci later applied the name to the New World, and even changed the spelling of his own name from Alberigo to Amerigo to reflect the importance of the discovery.
Another theory, first proposed by the Bristol antiquary and naturalist Alfred Hudd in 1908, is that America is derived from Richard Amerike, a merchant from Bristol who is believed to have financed John Cabot's 1497 voyage of discovery from England to Newfoundland, as suggested by documents found in Westminster Abbey a few decades ago. Supposedly, Bristol fishermen had been visiting the coast of North America for at least a century before Columbus' voyage, and Waldseemüller's maps are alleged to incorporate information from these early English journeys. The theory holds that a variant of Amerike's name appeared on an early English map (of which, however, no copies survive) and that this was the true inspiration for Waldseemüller.

Computer science
Computer science, or computing science, is the study of the theoretical foundations of information and computation and their implementation and application in computer systems. Computer science has many sub-fields; some emphasize the computation of specific results (such as computer graphics), while others relate to properties of computational problems (such as computational complexity theory). Still others focus on the challenges in implementing computations. For example, programming language theory studies approaches to describing computations, while computer programming applies specific programming languages to solve specific computational problems. A further subfield, human-computer interaction, focuses on the challenges in making computers and computations useful, usable and universally accessible to people.

History
Main article:
History of computer science
The history of computer science predates the invention of the modern digital computer by many centuries. Machines for calculating fixed numerical tasks, such as the abacus, have existed since antiquity. Wilhelm Schickard built the first mechanical calculator in 1623. Charles Babbage designed a difference engine in Victorian times (between 1837 and 1901), helped by Ada Lovelace. Around 1900 the IBM corporation sold punch-card machines. However, all of these machines were constrained to perform a single task, or at best some subset of all possible tasks.
During the 1940s, as newer and more powerful computing machines were developed, the term computer came to refer to the machines rather than their human predecessors. As it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. Computer science began to be established as a distinct academic discipline in the 1960s, with the creation of the first computer science departments and degree programs. Since practical computers became available, many applications of computing have become distinct areas of study in their own right.

Major achievements
The German military used the Enigma machine during World War II for communications they thought to be secret. The large-scale decryption of Enigma traffic at Bletchley Park was an important factor that contributed to Allied victory in WWII.
Despite its relatively short history as a formal academic discipline, computer science has made a number of fundamental contributions to science and society. These include:
Applications within computer science
A formal definition of computation and computability, and proof that there are computationally unsolvable and intractable problems.
The concept of a programming language, a tool for the precise expression of methodological information at various levels of abstraction.
Applications outside of computing
Sparked the Digital Revolution, which led to the current Information Age.
In cryptography, breaking the Enigma machine was an important factor contributing to the Allied victory in World War II.
Scientific computing enabled advanced study of the mind, and mapping the human genome became possible with the Human Genome Project. Distributed computing projects like Folding@home explore protein folding.

Relationship with other fields

Despite its name, much of computer science does not involve the study of computers themselves. Because of this, several alternative names have been proposed. Danish scientist Peter Naur suggested the term datalogy, to reflect the fact that the scientific discipline revolves around data and data treatment, while not necessarily involving computers. The first scientific institution to apply the term was DIKU, the Department of Datalogy at the University of Copenhagen, founded in 1969, with Peter Naur being the first professor in datalogy. The term is used mainly in the Scandinavian countries. Also, in the early days of computing, a number of terms for the practitioners of the field of computing were suggested in the Communications of the ACM: turingineer, turologist, flow-charts-man, applied meta-mathematician, and applied epistemologist. Three months later in the same journal, comptologist was suggested, followed the next year by hypologist. More recently, the term computics has been suggested.
In fact, the renowned computer scientist Edsger Dijkstra is often quoted as saying, "Computer science is no more about computers than astronomy is about telescopes." The design and deployment of computers and computer systems is generally considered the province of disciplines other than computer science. For example, the study of computer hardware is usually considered part of computer engineering, while the study of commercial computer systems and their deployment is often called information technology or information systems. Computer science is sometimes criticized as being insufficiently scientific, a view espoused in the statement "Science is to computer science as hydrodynamics is to plumbing" credited to Stan Kelly-Bootle and others. However, there has been much cross-fertilization of ideas between the various computer-related disciplines. Computer science research has also often crossed into other disciplines, such as artificial intelligence, cognitive science, physics (see quantum computing), and linguistics.
Computer science is considered by some to have a much closer relationship with mathematics than many scientific disciplines. Early computer science was strongly influenced by the work of mathematicians such as Kurt Gödel and Alan Turing, and there continues to be a useful interchange of ideas between the two fields in areas such as mathematical logic, category theory, domain theory, and algebra.
The relationship between computer science and software engineering is a contentious issue, which is further muddied by disputes over what the term "software engineering" means, and how computer science is defined. David Parnas, taking a cue from the relationship between other engineering and science disciplines, has claimed that the principal focus of computer science is studying the properties of computation in general, while the principal focus of software engineering is the design of specific computations to achieve practical goals, making the two separate but complementary disciplines.