# Reading Group Archive (2010)

## LLVM

On December 17 we looked at the Clang C/C++/Objective-C front-end for the LLVM compiler infrastructure. In particular, we watched the State of Clang video from the 2009 LLVM Developer's Meeting.

On December 10 we looked at the Haskell bindings for the LLVM compiler infrastructure. As a starting point, we looked at the following blog posts: one, two, and three.

On November 26, Tim Morton presented his COMP188 project "LLVM IR Tree Encoding in Scala".

Abstract

LLVM as a compiler framework offers language developers a wide range of attractive tools, such as code generation support for many target platforms and a modern source- and target-independent optimizer, all built around a well-specified intermediate code representation. Furthermore, the LLVM core libraries are well documented. However, the LLVM core libraries are written in C++, whereas Scala offers other features that are attractive for compiler development, such as functional language constructs and excellent support for pattern matching. It would be advantageous if we could devise a way to best utilise the respective strengths of LLVM and Scala, and obtain "the best of both worlds" for language developers.

The aim of this work is to investigate doing exactly that: to construct a library in which front-end compilation is carried out in Scala and back-end compilation is completed by LLVM, with minimal effort, by taking advantage of the fact that the LLVM compiler framework is built around the LLVM intermediate representation.
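The core idea of the project, encoding LLVM IR as a tree in Scala so that front-ends can build and pattern-match it, can be sketched as follows. This is an illustrative encoding of a tiny IR fragment, not the project's actual library; the names `Instr`, `Value`, and `IR.show` are invented for the example.

```scala
// Hypothetical sketch: a fragment of LLVM IR as a Scala algebraic data
// type. Pattern matching then gives a natural way to traverse or render
// the tree, e.g. into LLVM's textual IR syntax.
sealed trait Type
case object I32 extends Type

sealed trait Value
case class Const(n: Int) extends Value
case class Local(name: String) extends Value

sealed trait Instr
case class Add(res: String, ty: Type, l: Value, r: Value) extends Instr
case class Ret(ty: Type, v: Value) extends Instr

object IR {
  def show(v: Value): String = v match {
    case Const(n)    => n.toString
    case Local(name) => "%" + name
  }

  // Render an instruction in LLVM's textual assembly form.
  def show(i: Instr): String = i match {
    case Add(res, I32, l, r) => s"%$res = add i32 ${show(l)}, ${show(r)}"
    case Ret(I32, v)         => s"ret i32 ${show(v)}"
  }
}
```

For example, `IR.show(Add("t", I32, Local("x"), Const(1)))` yields the LLVM instruction `%t = add i32 %x, 1`, which could then be handed to the LLVM back-end for optimization and code generation.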

## Parsing and unparsing combinators

On October 29 and November 5, we discussed a paper that describes a new method for constructing parser combinators that guarantee termination while still allowing many forms of left recursion.

On October 22, we discussed a paper that describes "Syntactic Descriptions", a method for combining the parsing and unparsing aspects of a language description.
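The idea of a single description that serves as both a parser and an unparser can be sketched in a few lines of Scala. This is a toy illustration of the concept, not the paper's actual combinator library; `Syntax`, `imap`, and `number` are invented names.

```scala
// A toy "syntactic description": one value that bundles a parser and a
// pretty-printer for the same piece of syntax, so the two cannot drift
// apart as a grammar evolves.
case class Syntax[A](parse: String => Option[(A, String)],
                     print: A => String) {
  // Transport a description through a bijection, given as two functions.
  def imap[B](to: A => B)(from: B => A): Syntax[B] =
    Syntax(
      s => parse(s).map { case (a, rest) => (to(a), rest) },
      b => print(from(b))
    )
}

object Syntax {
  // Description of a non-empty run of decimal digits.
  val number: Syntax[Int] = Syntax(
    { s =>
      val digits = s.takeWhile(_.isDigit)
      if (digits.isEmpty) None
      else Some((digits.toInt, s.drop(digits.length)))
    },
    n => n.toString
  )
}
```

Running the same description in both directions, `Syntax.number.parse("42abc")` returns `Some((42, "abc"))` while `Syntax.number.print(42)` returns `"42"`, which is the round-trip property such descriptions are designed to guarantee.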

## Effect Systems

On October 1, we discussed a paper that describes a generic system for statically reasoning about effects in programs.

## Applied Type System (ATS)

On September 24, we discussed a paper that describes the Applied Type System method for combining programming with theorem proving.

## Polymorphic Type Inference

On September 17, we discussed a paper that presents another method for type inference in polymorphic functional languages.

On September 10, we discussed a paper that presents a method for type inference in polymorphic functional languages.

## Lazy Evaluation

On September 3, we discussed a paper that builds on last week's paper to derive an abstract machine implementation of lazy evaluation.

On August 27, we discussed a paper on semantics of lazy evaluation in functional languages.
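The operational core of call-by-need evaluation, the sharing that distinguishes lazy evaluation from call-by-name, can be illustrated with a memoising thunk. This is a minimal sketch of the idea only, not the formalism of either paper; `Thunk`, `force`, and the `evaluations` counter are invented for the example.

```scala
// A minimal model of call-by-need: a thunk defers its computation and
// memoises the result, so the body runs at most once however many times
// the value is demanded. Under call-by-name it would run on every demand.
class Thunk[A](body: () => A) {
  private var cached: Option[A] = None
  var evaluations = 0  // instrumentation: counts how often the body runs

  def force: A = cached match {
    case Some(v) => v            // already evaluated: reuse the result
    case None =>
      evaluations += 1
      val v = body()
      cached = Some(v)           // update the binding with the value
      v
  }
}
```

Forcing `new Thunk(() => 40 + 2)` twice returns 42 both times but evaluates the body only once, which is exactly the update-after-evaluation step that a heap-based semantics of laziness makes precise.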

## System Modelling and Design using Event-B

On August 20, Ken Robinson from UNSW talked about system modelling and design using Event-B and the Rodin tool. Ken has kindly agreed to let members of the group read a draft of his upcoming book on this topic. Chapters 2 and 3 give a basic introduction, with Chapters 4 and 5 providing more depth.

## Verified Programming Language Processing

On August 6 we discussed a paper that describes the Ott tool and the support that it provides for programming language semantics researchers.

On July 30 we discussed a paper that describes how reference attribute grammars have been verified using the Coq proof assistant.

## Compiler Correctness

On June 25 we discussed a paper that describes how programs can be extracted automatically from proofs of correctness using the Coq proof assistant.

On June 18 we continued discussing the CompCert project by walking through some of the formal specification of the Clight to Cminor translation and correctness proof.

On June 11 we continued discussing the CompCert project by considering a paper on verification of the compiler front-end.

On May 28 we discussed a paper by Xavier Leroy on formal verification of compilers.

## Running the NIM Next-Generation Weather Model on GPUs

On May 21 we hosted the following talk:

Running the NIM Next-Generation Weather Model on GPUs

Mark Govett

NOAA, Boulder, Colorado

We are using Graphics Processing Units (GPUs) to run a new weather model, called the Non-hydrostatic Icosahedral Model (NIM), being developed at NOAA's Earth System Research Laboratory (ESRL). The parallelization approach is to run the entire model on the GPU and only rely on the CPU for model initialization, I/O, and inter-processor communications. We have written a compiler, using the Eli Compiler Construction Tool, to convert Fortran into CUDA, and used it to parallelize the dynamics portion of the model. Dynamics, the most computationally intensive part of the model, is currently running 34 times faster on a single GPU than on the CPU. This presentation will briefly describe the NIM model, our compiler development, and parallelization results to date. I will also describe recent work evaluating commercial Fortran GPU compilers, and progress to date running the NIM on multiple GPUs.

## Scala

On May 14 we watched a short video talk and discussed a paper explaining how type specialization was added to the Scala implementation.

On May 7 we watched a short video talk and discussed a paper describing how named and default arguments were added to the Scala language.

On April 9 we discussed a paper that compares facilities for generic programming in Scala with those in Haskell.

On March 26 we discussed a paper on the compilation of Scala structural types for the Java virtual machine.
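The feature the paper analyses can be shown in a few lines. A Scala structural type constrains a parameter by the members it must have rather than by a named interface; on the JVM such member accesses are compiled to reflective calls, which is the implementation cost the paper studies. This sketch uses Scala 2 syntax, and `Closeable` and `withResource` are illustrative names, not an API from the paper.

```scala
// A structural type: any value with a `close(): Unit` member is accepted,
// with no common supertype required. The call `r.close()` below cannot be
// an ordinary virtual dispatch, so the compiler emits a reflective lookup.
import scala.language.reflectiveCalls

object Structural {
  type Closeable = { def close(): Unit }

  // Run `use`, then close the resource, whatever its class.
  def withResource[A](r: Closeable)(use: => A): A =
    try use finally r.close()
}
```

Any class that happens to define `close(): Unit` can be passed to `withResource` directly, which is the flexibility that motivates paying the reflective-call overhead.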

On March 19 we discussed a paper on the design of the Scala 2.8 collection library.

## Control Flow Analysis

On Tuesday March 2, 4-6pm, E6A357, we hosted the following visitor talk.

Harry Mairson, Brandeis University, Linear Logic and the Complexity of Control Flow Analysis

Static program analysis is about predicting the future: it's what compilers do at compile-time to predict and optimize what happens at run-time. What is the trade-off between the efficiency and the precision of the computed prediction? Control flow analysis (CFA) is a canonical form of program analysis that answers questions like "can call site X ever call procedure P?" or "can procedure P ever be called with argument A?" Such questions can be answered exactly by Girard's geometry of interaction (GoI), but in the interest of efficiency and time-bounded computation, control flow analysis computes a crude approximation to GoI, possibly including false positives.

Different versions of CFA are parameterized by their sensitivity to calling contexts, as represented by a contour: a sequence of k labels representing these contexts, analogous to Lévy's labelled lambda calculus. CFA with larger contours is designed to make the approximation more precise. A naive calculation shows that 0CFA (i.e., with k=0) can be solved in polynomial time, and kCFA (k>0, a constant) can be solved in exponential time. We show that these bounds are exact: the decision problem for 0CFA is PTIME-hard, and for kCFA is EXPTIME-hard. Each proof depends on fundamental insights about the linearity of programs. In both the linear and non-linear case, contrasting simulations of the linear logic exponential are used in the analysis. The key idea is to take the approximation mechanism inherent in CFA, and exploit its crudeness to do real computation.

This is joint work with David Van Horn (Northeastern University).

## Agda

On February 26 we discussed a tutorial on Agda, which is a dependently typed functional programming language and proof assistant.

## Language Embedding

On February 19 we discussed Domain specific language implementation via compile-time meta-programming by Tratt which describes a method for implementing domain-specific languages by embedding them into another language.

On February 12 we discussed Formalizing Homogeneous Language Embeddings by Clark and Tratt which describes a method for ensuring safety when multiple domain-specific languages are embedded in a host language.