Chris Lattner, the young developer who started a revolution in the C++ world

General Programming · 2018-04-09

Almost every C++ developer knows the LLVM infrastructure and the Clang compiler. But how many know that Chris Lattner created them when he was only 25 years old? How is that possible? When I was 25, I was still spending my time trying to understand the C++ basics.

The story begins with a thesis

In late 2000, Lattner joined the University of Illinois at Urbana-Champaign as a research assistant and M.Sc. student. While working with Vikram Adve, he designed and began implementing LLVM, an innovative infrastructure for optimizing compilers, which was the subject of his 2002 M.Sc. thesis. He completed his Ph.D. in 2005, researching new techniques for optimizing pointer-intensive programs and adding them to LLVM.

Here is the motivation behind the LLVM design, from his thesis abstract:

Modern programming languages and software engineering principles are causing increasing problems for compiler systems. Traditional approaches, which use a simple compile-link-execute model, are unable to provide adequate application performance under the demands of the new conditions. Traditional approaches to interprocedural and profile-driven compilation can provide the application performance needed, but require infeasible amounts of compilation time to build the application.

This thesis presents LLVM, a design and implementation of a compiler infrastructure which supports a unique multi-stage optimization system. This system is designed to support extensive interprocedural and profile-driven optimizations, while being efficient enough for use in commercial compiler systems.

The LLVM virtual instruction set is the glue that holds the system together. It is a low-level representation, but with high-level type information. This provides the benefits of a low-level representation (compact representation, wide variety of available transformations, etc.) as well as providing high-level information to support aggressive interprocedural optimizations at link- and post-link time. In particular, this system is designed to support optimization in the field, both at run-time and during otherwise unused idle time on the machine.

To summarize, the idea behind LLVM is to build everything around the LLVM Intermediate Representation (IR), which plays a role similar to Java bytecode or .NET IL.

LLVM IR is designed to host mid-level analyses and transformations that you find in the optimizer section of a compiler. It was designed with many specific goals in mind, including supporting lightweight runtime optimizations, cross-function/interprocedural optimizations, whole program analysis, and aggressive restructuring transformations, etc. The most important aspect of it, though, is that it is itself defined as a first class language with well-defined semantics.

Let’s consider a relatively straightforward function that takes three integer parameters and returns an arithmetic combination of them.

int mul_add(int x, int y, int z) {
  return x * y + z;
}

The LLVM IR for this function will look like:

define i32 @mul_add(i32 %x, i32 %y, i32 %z) {
entry:
  %tmp = mul i32 %x, %y
  %tmp2 = add i32 %tmp, %z
  ret i32 %tmp2
}

Back to the goal of the thesis, which focuses on optimization. To explain this compiler phase, nobody says it better than Chris Lattner, the father of LLVM, in this post:

“To give some intuition for how optimizations work, it is useful to walk through some examples. There are lots of different kinds of compiler optimizations, so it is hard to provide a recipe for how to solve an arbitrary problem. That said, most optimizations follow a simple three-part structure:

  • Look for a pattern to be transformed.
  • Verify that the transformation is safe/correct for the matched instance.
  • Do the transformation, updating the code.

The optimizer reads LLVM IR in, chews on it a bit, then emits LLVM IR, which hopefully will execute faster. In LLVM (as in many other compilers) the optimizer is organized as a pipeline of distinct optimization passes each of which is run on the input and has a chance to do something. Common examples of passes are the inliner (which substitutes the body of a function into call sites), expression reassociation, loop invariant code motion, etc. Depending on the optimization level, different passes are run: for example at -O0 (no optimization) the Clang compiler runs no passes, at -O3 it runs a series of 67 passes in its optimizer (as of LLVM 2.8).”
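The three-part structure Lattner describes can be sketched in plain C++ on a toy expression tree (the `Expr` struct and `simplify_mul_by_one` function below are hypothetical illustrations, not real LLVM classes):

```cpp
#include <cassert>

// Toy expression "IR" for illustration only -- not the real LLVM API.
struct Expr {
    enum Kind { Const, Var, Mul, Add };
    Kind kind;
    int value = 0;        // used when kind == Const
    Expr* lhs = nullptr;  // operands, used by Mul/Add
    Expr* rhs = nullptr;
};

// One peephole rule following the three-part structure above:
// 1) look for the pattern "x * 1",
// 2) verify the transformation is safe (x * 1 == x for any int x),
// 3) do the transformation: replace the expression with x.
Expr* simplify_mul_by_one(Expr* e) {
    if (e->kind == Expr::Mul &&          // step 1: a multiply node...
        e->rhs != nullptr &&
        e->rhs->kind == Expr::Const &&   // ...whose right operand is a constant
        e->rhs->value == 1) {            // step 2: the constant is exactly 1
        return e->lhs;                   // step 3: rewrite "x * 1" into "x"
    }
    return e;                            // no match: leave the node unchanged
}
```

A real LLVM pass works the same way in spirit, but matches over `llvm::Instruction` nodes and runs as one stage in the pass pipeline described above.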

LLVM and Clang as infrastructure to build tools:

A major design concept for Clang is its use of a library-based architecture. In this design, various parts of the front-end can be cleanly divided into separate libraries which can then be mixed up for different needs and uses. In addition, the library-based approach encourages good interfaces and makes it easier for new developers to get involved (because they only need to understand small pieces of the big picture).

The design of LLVM/Clang makes it an infrastructure for building tools: its library-based architecture makes features easier to reuse and to integrate into other projects.

Many tools are based on LLVM and Clang; among them are clang-tidy, clang-query, CppDepend, and more.

Clang is always up to date with the new standards

Clang keeps pace with each new standard. Furthermore, it provides experimental support for upcoming features. The current version has experimental support for some proposed features of the C++ standard following C++17, provisionally named C++2a. Note that support for these features may change or be removed without notice, as the draft C++2a standard evolves.

You can use Clang in C++2a mode with the -std=c++2a option.

Language Feature                                                   | C++2a Proposal | Available in Clang?
Default member initializers for bit-fields                         | P0683R1        | Clang 6
const&-qualified pointers to members                               | P0704R1        | Clang 6
Allow lambda-capture [=, this]                                     | P0409R2        | Clang 6
__VA_OPT__ for preprocessor comma elision                          | P0306R4        | Clang 6
Designated initializers                                            | P0329R4        | Partial (extension)
Initializer list constructors in class template argument deduction | P0702R1        | Clang 6
Access checking on specializations                                 | P0692R1        | Partial
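As a taste of one feature from the table, here is a small sketch of designated initializers (P0329R4); the `Point` type and `make_point` function are hypothetical names for illustration. Clang accepts this form in -std=c++2a mode, and partially as an extension in earlier modes, as the table notes:

```cpp
#include <cassert>

struct Point { int x = 0; int y = 0; int z = 0; };

// C++2a designated initializers (P0329R4): name only the members you
// want to set; the rest fall back to their default member initializers.
// Designators must appear in declaration order.
Point make_point() {
    return Point{.x = 1, .z = 3};  // y keeps its default of 0
}
```

This reads much like C99 designated initialization, which C++ lacked until this proposal.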

Clang is now adopted by many big companies, including Google, Microsoft, and of course Apple, which has used it from the beginning.

The LLVM infrastructure can be used to implement compilers and interpreters for many programming languages. For example, PostgreSQL now uses the LLVM JIT to optimize query execution, as described here:

Currently, PostgreSQL executes SQL queries using the interpreter, which is quite slow. However, significant speedup can be achieved by compiling query “on-the-fly” with LLVM JIT: this way it’s possible for given SQL query to generate more effective code, which is specialized using the information known at run time, and then additionally optimize it with LLVM. This approach is especially important for complex queries, where performance is CPU-bound.

Thanks to all the LLVM/Clang developers, and special thanks to Chris Lattner, who initiated the revolution and gave us a powerful infrastructure for developing meaningful tools.

Finally, a message to the young developers: be the next Chris Lattner and start a revolution in the programming world.

CppDepend Blog
