Pat Langley

Pat Langley (born May 2, 1953) is an American cognitive scientist and AI researcher, Honorary Professor of Computer Science at the University of Auckland, and Director of the Institute for the Study of Learning and Expertise. He coined the term decision stump and was the founding editor of the journals Machine Learning and Advances in Cognitive Systems.

Quotes

  • New heuristic (1) is used to prefer revision to premises that support relatively weak generalized beliefs.
    • Donald Rose & Pat Langley (1987). "Belief revision and induction." Proceedings of the Ninth Annual Conference of the Cognitive Science Society (pp. 748–752). Seattle, WA: Lawrence Erlbaum. p. 750
  • In recent years, researchers have made considerable progress on the worst-case analysis of inductive learning tasks, but for theoretical results to have impact on practice, they must deal with the average case. In this paper we present an average-case analysis of a simple algorithm that induces one-level decision trees for concepts defined by a single relevant attribute. Given knowledge about the number of training instances, the number of irrelevant attributes, the amount of class and attribute noise, and the class and attribute distributions, we derive the expected classification accuracy over the entire instance space. We then examine the predictions of this analysis for different settings of these domain parameters, comparing them to experimental results to check our reasoning.
    • Wayne Iba and Pat Langley (1992); "Induction of One-Level Decision Trees," in ML92: Proceedings of the Ninth International Conference on Machine Learning, Aberdeen, Scotland, 1–3 July 1992, San Francisco, CA: Morgan Kaufmann, pp. 233–240; abstract
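
The one-level decision trees analyzed in this paper are what are now commonly called decision stumps. As a rough illustration only (not Iba and Langley's implementation, and with invented toy data), the following sketch induces such a tree by trying every threshold split on every attribute and keeping the split whose majority-vote predictions make the fewest training mistakes:

  import numpy as np

  def fit_stump(X, y):
      # Try every (attribute, threshold) split; keep the one whose
      # majority-vote predictions make the fewest training mistakes.
      best_err, best = np.inf, None
      for j in range(X.shape[1]):
          for t in np.unique(X[:, j]):
              left, right = y[X[:, j] <= t], y[X[:, j] > t]
              err = (min((left == 0).sum(), (left == 1).sum())
                     + min((right == 0).sum(), (right == 1).sum()))
              if err < best_err:
                  left_label = int((left == 1).sum() >= (left == 0).sum())
                  right_label = int((right == 1).sum() >= (right == 0).sum())
                  best_err, best = err, (j, t, left_label, right_label)
      return best

  def predict(stump, X):
      j, t, left_label, right_label = stump
      return np.where(X[:, j] <= t, left_label, right_label)

  rng = np.random.default_rng(0)
  X = rng.random((100, 2))         # attribute 0 is relevant, attribute 1 is noise
  y = (X[:, 0] > 0.5).astype(int)  # concept defined by a single relevant attribute
  stump = fit_stump(X, y)
  print("split on attribute", stump[0],
        "training accuracy:", (predict(stump, X) == y).mean())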

Scientific discovery: Computational explorations of the creative processes, 1987

Pat Langley, Herbert A. Simon, Gary L. Bradshaw, and Jan M. Zytkow, Scientific discovery: Computational explorations of the creative processes. MIT Press, 1987.

  • Science is a seamless web: each idea spins out to a new research task, and each research finding suggests a repair or an elaboration of the network of theory. Most of the links connecting the nodes are short, each attaching to its predecessors. Weaving our way through the web, we stop from time to time to rest and survey the view — and to write a paper or a book.
    • p. v; Preface
  • In the scientist’s house are many mansions... Outsiders often regard science as a sober enterprise, but we who are inside see it as the most romantic of all callings. Both views are right. The romance adheres to the processes of scientific discovery, the sobriety to the responsibility for verification...
Histories of science put the spotlight on discovery... The story of scientific progress reaches its periodic climaxes at the moments of discovery... In the philosophy of science, all the emphasis is on verification, on how we can tell the true gold of scientific law from the fool’s gold of untested fantasy. In fact, it is still the majority view among philosophers of science that only verification is a proper subject of inquiry, that nothing of philosophical interest can be said about the process of discovery...
But we believe that science is also poetry, and—perhaps even more heretical—that discovery has its reasons, as poetry does. However romantic and heroic we find the moment of discovery, we cannot believe either that the events leading up to that moment are entirely random and chaotic or that they require genius that can be understood only by congenial minds. We believe that finding order in the world must itself be a process impregnated with purpose and reason.
  • p. 3; as cited in: Richard P. Gabriel, and Kevin J. Sullivan. "Better science through art." ACM Sigplan Notices. Vol. 45. No. 10. ACM, 2010.
  • BACON.4 does not have heuristics for considering trigonometric functions of variables directly. Thus, in the run described here we simply told the system to examine the sines. In the following chapter we will see how BACON can actually arrive at the sine term on its own in a rather subtle manner.
    • p. 142, note 5
  • In all of these cases, the error arose from accepting “loose” fits of a law to data, and the later, correct formulation provided a law that fit the data much more closely. If we wished to simulate this phenomenon with BACON, we would only have to set the error allowance generously at the outset, then set stricter limits after an initial law had been found.
    • p. 224

"Selection of relevant features and examples in machine learning," 1997

Avrim Blum and Pat Langley. "Selection of relevant features and examples in machine learning." Artificial Intelligence 97.1 (1997): 245–271.

  • As machine learning aims to address larger, more complex tasks, the problem of focusing on the most relevant information in a potentially overwhelming quantity of data has become increasingly important.
    • p. 245
  • Given a sample of data S, a learning algorithm L, and a feature set A, feature xᵢ is incrementally useful to L with respect to A if the accuracy of the hypothesis that L produces using the feature set {xᵢ} ∪ A is better than the accuracy achieved using just the feature set A.
    • p. 248; definition of incremental usefulness
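
Read operationally, this definition suggests a direct test: run L with and without xᵢ and compare accuracies. The sketch below is one hypothetical instantiation; the learner (logistic regression), the synthetic data, and the use of cross-validated accuracy are all assumptions for illustration, not details from the paper:

  import numpy as np
  from sklearn.linear_model import LogisticRegression
  from sklearn.model_selection import cross_val_score

  rng = np.random.default_rng(0)
  n = 400
  x_i = rng.normal(size=n)             # candidate feature x_i
  A = rng.normal(size=(n, 2))          # current feature set A (two columns)
  y = (x_i + A[:, 0] > 0).astype(int)  # x_i genuinely carries signal here

  def accuracy(features):
      # Cross-validated accuracy stands in for "the accuracy of the
      # hypothesis that L produces" in the definition.
      return cross_val_score(LogisticRegression(max_iter=1000),
                             features, y, cv=5).mean()

  acc_A = accuracy(A)
  acc_both = accuracy(np.column_stack([x_i, A]))
  print(f"A alone: {acc_A:.3f}   with x_i added: {acc_both:.3f}")
  print("x_i is incrementally useful" if acc_both > acc_A
        else "x_i is not incrementally useful")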

"Intelligent behavior in humans and machines," 2006

Pat Langley, "Intelligent behavior in humans and machines." American Association for Artificial Intelligence. 2006.

  • A cognitive architecture specifies aspects of an intelligent system that are stable over time, much as in a building’s architecture. These include the memories that store perceptions, beliefs, and knowledge, the representation of elements that are contained in these memories, the performance mechanisms that use them, and the learning processes that build on them. Such a framework typically comes with a programming language and software environment that supports the efficient construction of knowledge-based systems.
    • p. 6
  • Research on cognitive architectures varies widely in the degree to which it attempts to match psychological data. ACT-R (Anderson & Lebiere, 1998) and EPIC (Kieras & Meyer, 1997) aim for quantitative fits to reaction time and error data, whereas Prodigy (Minton et al., 1989) incorporates selected mechanisms like means-ends analysis but otherwise makes little contact with human behavior. Architectures like Soar (Laird, Newell, & Rosenbloom, 1987; Newell, 1990) and Icarus (Langley & Choi, in press; Langley & Rogers, 2005) take a middle position, drawing on many psychological ideas but also emphasizing their strength as flexible AI systems. What they hold in common is an acknowledgement of their debt to theoretical concepts from cognitive psychology and a concern with the same intellectual abilities as humans.
    • p. 7
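
The components enumerated in the first quote above (memories, a representation of their elements, a performance mechanism, and a learning process) can be caricatured in a few lines of code. The skeleton below is purely hypothetical, a toy and not Soar, ACT-R, Icarus, or any actual architecture:

  from dataclasses import dataclass, field

  @dataclass
  class Rule:
      condition: frozenset  # beliefs that must all hold for the rule to fire
      action: str           # belief added when the rule fires

  @dataclass
  class Agent:
      beliefs: set = field(default_factory=set)      # short-term memory
      knowledge: list = field(default_factory=list)  # long-term memory of Rules

      def perceive(self, percepts):
          self.beliefs |= set(percepts)

      def act(self):
          # Performance mechanism: fire the first rule whose condition matches.
          for rule in self.knowledge:
              if rule.condition <= self.beliefs and rule.action not in self.beliefs:
                  self.beliefs.add(rule.action)
                  return rule.action
          return None

      def learn(self, condition, action):
          # Learning process: store a new rule built from experience.
          self.knowledge.append(Rule(frozenset(condition), action))

  agent = Agent()
  agent.learn({"hungry", "has-food"}, "eat")
  agent.perceive({"hungry", "has-food"})
  print(agent.act())  # -> eat

Only the skeleton (the rule format and the recognize-act cycle) is fixed over time, while the beliefs and rules change; that division of labor is the point of the quoted definition.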
External links

  • Pat Langley, Institute for the Study of Learning and Expertise