CS Depth: Artificial Intelligence
The courses below fulfill the Depth: Artificial Intelligence requirement in computer science.
A laboratory-based introduction to robotics. Focus will be on both hardware (sensors and actuators) and software (sensor processing and behavior development). Topics will include: the basics of kinematics, dynamics, control, and motion planning; and an introduction to Artificial Intelligence (AI) and Machine Learning (ML). Formerly EECS 295. This course fulfills the AI Depth requirement.
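To give a flavor of the kinematics topic mentioned above, here is a minimal sketch (not course material) of forward kinematics for a planar two-link arm; the function name and the unit link lengths are illustrative assumptions.

```python
import math

def planar_2link_fk(theta1, theta2, l1=1.0, l2=1.0):
    """Return the end-effector (x, y) position of a planar two-link arm.

    theta1 is the shoulder angle from the x-axis; theta2 is the elbow
    angle relative to the first link. Link lengths default to 1.0.
    """
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y
```

With both joint angles at zero the arm lies along the x-axis, so the end effector sits at (l1 + l2, 0).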
Introduction to Lisp and programming knowledge-based systems and interfaces. Strong emphasis on writing maintainable, extensible systems. Topics include: semantic networks, frames, pattern matching, deductive inference rules, case-based reasoning, discrimination trees. Project-driven. Substantial programming assignments.
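The pattern-matching topic above can be previewed in a few lines. The following is a hypothetical Python analogue (the course itself uses Lisp), with `?`-prefixed strings acting as match variables; it is an illustrative sketch, not course code.

```python
def match(pattern, datum, bindings=None):
    """Match a pattern containing ?variables against a nested tuple.

    Returns a dict of variable bindings on success, or None on failure.
    """
    bindings = dict(bindings or {})
    if isinstance(pattern, str) and pattern.startswith("?"):
        if pattern in bindings:
            # Variable already bound: require a consistent value.
            return bindings if bindings[pattern] == datum else None
        bindings[pattern] = datum
        return bindings
    if (isinstance(pattern, tuple) and isinstance(datum, tuple)
            and len(pattern) == len(datum)):
        for p, d in zip(pattern, datum):
            bindings = match(p, d, bindings)
            if bindings is None:
                return None
        return bindings
    # Constants must match exactly.
    return bindings if pattern == datum else None
```

For example, matching `("likes", "?x", "lisp")` against `("likes", "kim", "lisp")` binds `?x` to `"kim"`.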
A semantics-oriented introduction to natural language processing, broadly construed. Representation of meaning and knowledge inference in story understanding, script/frame theory, plans and plan recognition, counter-planning, and thematic structures. This course satisfies the project requirement.
Principles and practice of organizing and building AI reasoning systems. Topics include pattern-directed rule systems, truth-maintenance systems, and constraint languages. This course satisfies the project requirement.
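The pattern-directed rule systems named above can be illustrated by a tiny forward-chaining engine; the engine, facts, and rules below are invented for illustration and are not course code.

```python
def forward_chain(facts, rules):
    """Apply (premises -> conclusion) rules until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Illustrative rule base: penguin => bird; bird + alive => can_fly.
RULES = [
    (("penguin",), "bird"),
    (("bird", "alive"), "can_fly"),
]
derived = forward_chain({"penguin", "alive"}, RULES)
```

Starting from `{"penguin", "alive"}`, the engine first derives `bird` and then `can_fly`, the kind of chained inference such reasoning systems organize at scale.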
Core techniques and applications of artificial intelligence. Representation, retrieval, and application of knowledge for problem solving. Hypothesis exploration, theorem proving, vision, and neural networks.
Machine Learning is the study of algorithms that improve automatically through experience. Topics covered typically include Bayesian Learning, Decision Trees, Genetic Algorithms, and Neural Networks.
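As a small taste of the decision-tree topic, here is a sketch of the entropy and information-gain computations commonly used to choose splits; the function names and examples are illustrative, not course material.

```python
import math

def entropy(labels):
    """Shannon entropy (in bits) of a sequence of class labels."""
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def information_gain(labels, split):
    """Entropy reduction from partitioning labels by a boolean feature."""
    left = [y for y, s in zip(labels, split) if s]
    right = [y for y, s in zip(labels, split) if not s]
    n = len(labels)
    return (entropy(labels)
            - (len(left) / n) * entropy(left)
            - (len(right) / n) * entropy(right))
```

A split that perfectly separates the classes yields a gain equal to the parent's entropy; an uninformative split yields zero gain.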
Principles and practices of knowledge representation, including logics, ontologies, common sense knowledge, and semantic web technologies. Prerequisite: EECS 348, EECS 325, or equivalent experience with artificial intelligence. This course satisfies the project requirement.
Joint with EECS 372. This course focuses on the exploration, construction, and analysis of multi-agent models. Sample models from a variety of content domains are explored and analyzed. Spatial and network topologies are introduced. The prominent agent-based frameworks are covered, as well as methodology for replicating, verifying, and validating agent-based models. We use state-of-the-art ABM and complexity science tools. This course can help satisfy the project course and artificial intelligence area course requirements for CS and CIS majors, and satisfies the breadth requirement in artificial intelligence for Ph.D. students in CS. It also satisfies a design course requirement for Learning Sciences graduate students, counts toward the Cognitive Science specialization, and counts as an advanced elective for the Cognitive Science major.
After a brief introduction to numerical computation issues, the course will continue with a sequence of canonical problem settings (e.g., intersections; arrangements/duality), mostly focusing on the combinatorial aspects of the algorithms and the impact of the data structures. Each part will be cast in a respective application setting (GIS, motion planning, etc.). The last part of the course will present several potpourri-like topics, e.g., skeletons/medial axis and Davenport-Schinzel sequences.
This is a joint projects class with Medill in conjunction with the newly announced Knight News Innovation Lab at Northwestern. McCormick students (primarily CS and CE majors) and journalism students will join cross-functional teams to assess and develop, from both an audience/market perspective and a technology perspective, a range of technology projects with the ultimate goal of deployment for impact in media and journalism. Some projects may continue over the summer if students are interested.
This course will explore the use of formal knowledge representation and reasoning methods from artificial intelligence in the context of an experimental computer game. Topics include logic programming and the Prolog language, knowledge representation, planning and action selection, and simple natural language dialog.
This course will introduce some of the central topics in computational learning theory, a field that approaches the question of whether machines can learn from the perspective of theoretical computer science. We will study well-defined, rigorous mathematical models of learning in which it is possible to give precise analyses of learning problems and algorithms. A major focus of the course will be the computational efficiency of learning in these models. We will develop some provably efficient algorithms and explain why provably efficient algorithms are unlikely to exist in other models. We will only cover topics that permit a mathematically rigorous analysis, so mathematical maturity is absolutely necessary. In particular, some familiarity with basic probability (such as linearity of expectation) and basic linear algebra will be necessary. The emphasis of the course will be on proofs, so students taking this course should enjoy proofs and mathematics.
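One example of the kind of rigorous result such a course studies is the standard PAC sample-complexity bound for a consistent learner over a finite hypothesis class H: m >= (ln|H| + ln(1/delta)) / epsilon samples suffice for error at most epsilon with probability at least 1 - delta. The helper below is an illustrative sketch of that arithmetic, not course code.

```python
import math

def pac_sample_bound(h_size, eps, delta):
    """Samples sufficient for a consistent learner over a finite class.

    Standard bound: m >= (ln|H| + ln(1/delta)) / eps, rounded up.
    """
    return math.ceil((math.log(h_size) + math.log(1.0 / delta)) / eps)
```

For instance, with |H| = 1000 hypotheses, epsilon = 0.1, and delta = 0.05, the bound asks for 100 samples.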
Advances in technology have begun to allow for the production of large groups, or swarms, of robots; however, there remains a large gap between their current capabilities and those of swarms found in nature or envisioned for future robot swarms. These deficiencies are the result of two factors: difficulties in the algorithmic control of these swarms, and limitations in the hardware capabilities of the individual robots. This class surveys state-of-the-art research that addresses these deficiencies. Coursework includes reading research papers, student presentations and discussion of selected papers, and a final project implementing studied topics in a real or simulated robot swarm.
Fundamental and advanced topics in statistical pattern recognition, including Bayesian decision theory, maximum-likelihood and Bayesian estimation, nonparametric density estimation, component analysis and discriminants, kernel machines, feature selection, dimension reduction and embedding, boosting, minimum description length, mixture models and clustering, spectral clustering, and Bayesian networks and hidden Markov models, with applications to image and video pattern recognition.
Coverage of artificial intelligence, machine learning, and statistical estimation topics that are especially relevant for robot operation and robotics research. The focus is on robotics-relevant aspects of ML and AI that are not covered in depth in EECS 348 or EECS 349. Course evaluation will be largely project-based.
Probabilistic graphical models are a powerful technique for handling uncertainty in machine learning. The course will cover how probability distributions can be represented in graphical models, how inference and learning are performed in the models, and how the models are utilized for machine learning in practice.
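Inference in a graphical model can be previewed on a toy two-node network (Rain -> WetGrass), computing a posterior by enumerating the joint distribution; the probabilities and names below are invented for illustration.

```python
# Hypothetical CPTs for a two-node Bayesian network: Rain -> WetGrass.
P_RAIN = 0.2
P_WET_GIVEN_RAIN = {True: 0.9, False: 0.1}

def posterior_rain_given_wet():
    """P(Rain | Wet) by enumerating the joint over the Rain variable."""
    joint = {r: (P_RAIN if r else 1.0 - P_RAIN) * P_WET_GIVEN_RAIN[r]
             for r in (True, False)}
    # Normalize: Bayes' rule with the evidence probability as denominator.
    return joint[True] / (joint[True] + joint[False])
```

Here the evidence "wet grass" raises the probability of rain from the 0.2 prior to 0.18/0.26, roughly 0.69; exact and approximate versions of this enumeration are what inference algorithms scale up to larger networks.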