The objective of this course is to learn basic concepts related to ensuring that systems behave as expected. Reliable, correct behaviour is crucial for safety-critical applications: all possible operation scenarios must be considered, and bugs can prove very costly. Traditional testing can overlook unlikely behaviour that proves disastrous later, and it does not provide the kind of definitive guarantees that we require of such systems.
One way to get guarantees about safe operation is via the use of formal verification. In this course, we will study how to formally represent real-world systems using various abstract models like logic and automata, and how to express and verify properties that capture desirable system behaviour. In addition, since verification by hand is cumbersome and often error-prone, we will study tools that are often used to automate this process.
Topics we hope to cover in this course include various temporal logics, Hoare logic, SMT solving, and theorem proving.
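To give a tiny taste of the kind of satisfiability questions that underlie SMT solving, here is a minimal sketch (not course material, and not a real solver): it checks a small propositional formula by naive enumeration of all truth assignments, the brute-force baseline that actual SAT/SMT solvers improve upon.

```python
from itertools import product

# Formula under test: (p or q) and (not p or r) and (not r)
def formula(p, q, r):
    return (p or q) and ((not p) or r) and (not r)

def find_model():
    """Return a satisfying assignment as a dict, or None if unsatisfiable."""
    for p, q, r in product([False, True], repeat=3):
        if formula(p, q, r):
            return {"p": p, "q": q, "r": r}
    return None

print(find_model())  # a model exists: p False, q True, r False
```

Enumeration takes 2^n checks for n variables; solvers covered in the course (and the decision procedures behind them) exist precisely to do far better than this in practice.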
The course involves concepts from logic, automata theory, algorithms, proof theory, and programming languages. Understanding the material will, therefore, require some mathematical acumen -- students should, at the very least, be able to write and understand rigorous formal proofs, especially those involving induction. Students will also be required to be conversant with basic concepts of propositional/predicate logic, syntax/semantics, automata, and algorithms. A very basic grasp of complexity theory is also necessary; notions like reductions, (un-)decidability, and NP-completeness should be familiar. You should have taken both of the following courses (or equivalents thereof):
Audit policy: B- or above, with at least 80% in the first quiz.
Pass criterion for credit: 40% or above, with at least 80% in the first quiz.