Compiler
From Wikipedia, the free encyclopedia
This article is about the computing term. For the anime, see Compiler (anime).
[Image: a diagram of the operation of a typical multi-language, multi-target compiler.]
A compiler is a computer program (or set of programs) that translates text written in a computer language (the source language) into another computer language (the target language). The original text is usually called the source code and the output is called object code. Commonly the output has a form suitable for processing by other programs (e.g., a linker), but it may be a human-readable text file.
The most common reason for wanting to translate source code is to create an executable program. The name "compiler" is primarily used for programs that translate source code from a high-level programming language to a lower-level language (e.g., assembly language or machine language). A program that translates from a low-level language to a higher-level one is a decompiler. A program that translates between high-level languages is usually called a language translator, source-to-source translator, or language converter. A language rewriter is usually a program that translates the form of expressions without a change of language.
A compiler is likely to perform many or all of the following operations: lexical analysis, preprocessing, parsing, semantic
analysis, code generation, and code optimization.
Contents
1 History
1.1 Compilers in education
2 Compiler output
2.1 Compiled versus interpreted languages
2.2 Hardware compilation
3 Compiler design
3.1 One-pass versus multi-pass compilers
3.2 Front end
3.3 Back end
4 Related techniques
5 See also
6 Notes
7 References
8 External links
History
For many years, software for early computers was written primarily in assembly language. Higher-level programming languages did not appear until the benefits of being able to reuse software on different kinds of CPU started to outweigh the cost of writing a compiler. The very limited memory capacity of early computers also created many technical problems when implementing a compiler.
Towards the end of the 1950s, machine-independent programming languages were first proposed. Subsequently, several experimental compilers were developed. The first compiler was written by Grace Hopper in 1952, for the A-0 programming language. The FORTRAN team led by John Backus at IBM is generally credited with having introduced the first complete compiler, in 1957. COBOL was an early language to be compiled on multiple architectures, in 1960.[1]
In many application domains the idea of using a higher level language quickly caught on. Because of the expanding
functionality supported by newer programming languages and the increasing complexity of computer architectures,
compilers have become more and more complex.
Early compilers were written in assembly language. The first self-hosting compiler, capable of compiling its own source code in a high-level language, was created for Lisp by Hart and Levin at MIT in 1962.[2] Since the 1970s it has become common practice to implement a compiler in the language it compiles, although both Pascal and C have been popular choices of implementation language. Building a self-hosting compiler is a bootstrapping problem: the first such compiler for a language must be compiled either by a compiler written in a different language or, as in Hart and Levin's Lisp compiler, by running the compiler in an interpreter.
Compilers in education
Compiler construction and compiler optimization are taught at universities as part of the computer science curriculum.
Such courses are usually supplemented with the implementation of a compiler for an educational programming language.
A well-documented example is Niklaus Wirth's PL/0 compiler, which Wirth used to teach compiler construction in the
1970s.[3] In spite of its simplicity, the PL/0 compiler introduced several influential concepts to the field:
1. Program development by stepwise refinement (also the title of a 1971 paper by Wirth[4])
2. The use of a recursive descent parser
3. The use of EBNF to specify the syntax of a language
4. A code generator producing portable P-code
5. The use of T-diagrams in the formal description of the bootstrapping problem
Compiler output
One classification of compilers is by the platform on which their generated code executes. This is known as the target
platform.
A native or hosted compiler is one whose output is intended to directly run on the same type of computer and operating
system as the compiler itself runs on. The output of a cross compiler is designed to run on a different platform. Cross
compilers are often used when developing software for embedded systems that are not intended to support a software
development environment.
The output of a compiler that produces code for a virtual machine (VM) may or may not be executed on the same
platform as the compiler that produced it. For this reason such compilers are not usually classified as native or cross
compilers.
Compiled versus interpreted languages
Higher-level programming languages are generally divided for convenience into compiled languages and interpreted
languages. However, there is rarely anything about a language that requires it to be exclusively compiled, or exclusively
interpreted. The categorization usually reflects the most popular or widespread implementations of a language — for
instance, BASIC is sometimes called an interpreted language, and C a compiled one, despite the existence of BASIC
compilers and C interpreters.
In a sense, all languages are interpreted, with "execution" being merely a special case of interpretation performed by
transistors switching on a CPU. Modern trends toward just-in-time compilation and bytecode interpretation also blur the
traditional categorizations.
There are exceptions. Some language specifications spell out that implementations must include a compilation facility; for
example, Common Lisp. Other languages have features that are very easy to implement in an interpreter, but make
writing a compiler much harder; for example, APL, SNOBOL4, and many scripting languages allow programs to
construct arbitrary source code at runtime with regular string operations, and then execute that code by passing it to a
special evaluation function. To implement these features in a compiled language, programs must usually be shipped with
a runtime library that includes a version of the compiler itself.
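To illustrate, here is a minimal Python sketch of the pattern just described, with Python standing in for such a scripting language (the example itself is invented): the program builds source text with ordinary string operations and executes it with the built-in eval function.

    # Construct source code at runtime with ordinary string operations,
    # then execute it by passing it to the evaluation function.
    op = "+"                                  # chosen at runtime
    source = "lambda a, b: a " + op + " b"    # build the source text
    add = eval(source)                        # evaluate the text as code
    print(add(2, 3))                          # prints 5

A compiler for a language with this feature must either ship an interpreter for the constructed code or, as noted above, include a version of the compiler itself in the runtime library.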
Hardware compilation
The output of some compilers may target hardware at a very low level, for example a field-programmable gate array (FPGA) or structured application-specific integrated circuit (ASIC). Such compilers are said to be hardware compilers or synthesis tools because the programs they compile effectively control the final configuration of the hardware and how it operates; the output of the compilation is not a sequence of instructions that are executed in turn, but an interconnection of transistors or lookup tables. For example, XST is the Xilinx Synthesis Tool used for configuring FPGAs. Similar tools are available from Altera, Synplicity, Synopsys and other vendors.
Compiler design
The approach taken to compiler design is affected by the complexity of the processing that needs to be done, the experience of the person(s) designing it, and the resources (e.g., people and tools) available.
A compiler for a relatively simple language written by one person might be a single, monolithic piece of software. When the source language is large and complex and high-quality output is required, the design may be split into a number of relatively independent phases, or passes. Having separate phases means development can be parceled out into small parts and given to different people. It also becomes much easier to replace a single phase with an improved one, or to insert new phases later (e.g., additional optimizations).
The division of the compilation process into phases (or passes) was championed by the Production Quality Compiler-Compiler Project (PQCC) at Carnegie Mellon University. This project introduced the terms front end, middle end, and back end.
All but the smallest of compilers have more than two phases. However, these phases are usually regarded as being part of the front end or the back end. The point where these two ends meet is always open to debate. The front end is generally considered to be where syntactic and semantic processing takes place, along with translation to a lower level of representation than source code.
The middle end is usually designed to perform optimizations on a form other than the source code or machine code. This
source code/machine code independence is intended to enable generic optimizations to be shared between versions of the
compiler supporting different languages and target processors.
The back end takes the output from the middle end. It may perform more analysis, transformations and optimizations that are specific to a particular computer. Then it generates code for a particular processor and OS.
This front-end/middle/back-end approach makes it possible to combine front ends for different languages with back ends for different CPUs. Practical examples of this approach are the GNU Compiler Collection, LLVM, and the Amsterdam Compiler Kit, which have multiple front ends, shared analysis, and multiple back ends.
One-pass versus multi-pass compilers
Classifying compilers by number of passes has its background in the hardware resource limitations of computers. Compiling involves a great deal of work, and early computers did not have enough memory to hold one program that did all of it. So compilers were split into smaller programs, each of which made a pass over the source (or some representation of it), performing some of the required analysis and translation.
The ability to compile in a single pass is often seen as a benefit because it simplifies the job of writing a compiler, and one-pass compilers are generally faster than multi-pass compilers. Many languages were designed so that they could be compiled in a single pass (e.g., Pascal).
In some cases the design of a language feature may require a compiler to perform more than one pass over the source. For instance, consider a declaration appearing on line 20 of the source which affects the translation of a statement appearing on line 10. In this case, the first pass needs to gather information about declarations appearing after the statements they affect, with the actual translation happening during a subsequent pass, as in the sketch below.
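The following is a minimal Python sketch of this two-pass idea, for a hypothetical toy language in which a variable may be used before it is declared (the toy language, its syntax, and all names here are invented for illustration):

    # Toy two-pass translator for a made-up language in which a variable
    # may be used before it is declared.
    def compile_program(lines):
        # Pass 1: collect every declaration into a symbol table,
        # regardless of where it appears in the source.
        symbols = {}
        for line in lines:
            if line.startswith("var "):            # e.g. "var x = 42"
                name, value = line[4:].split("=")
                symbols[name.strip()] = value.strip()
        # Pass 2: translate statements, now that all declarations are known.
        output = []
        for line in lines:
            if line.startswith("print "):          # e.g. "print x"
                name = line[6:].strip()
                output.append("PUSH " + symbols[name])   # toy 'instructions'
                output.append("PRINT")
        return output

    # The use of x on line 1 precedes its declaration on line 2.
    print(compile_program(["print x", "var x = 42"]))
    # ['PUSH 42', 'PRINT']

A one-pass compiler for such a language would have to either reject the program or patch up the translation later (backpatching); collecting declarations first keeps the translation pass simple.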
The disadvantage of compiling in a single pass is that it is not possible to perform many of the sophisticated optimizations
needed to generate high quality code. It can be difficult to count exactly how many passes an optimizing compiler makes.
For instance, different phases of optimization may analyse one expression many times but only analyse another
expression once.
Splitting a compiler up into small programs is a technique used by researchers interested in producing provably correct
compilers. Proving the correctness of a set of small programs often requires less effort than proving the correctness of a
larger, single, equivalent program.
While the typical multi-pass compiler outputs machine code from its final pass, there are several other types:
A "source-to-source compiler" is a type of compiler that takes a high level language as its input and outputs a high
level language. For example, an automatic parallelizing compiler will frequently take in a high level language
program as an input and then transform the code and annotate it with parallel code annotations (e.g. OpenMP) or
language constructs (e.g. Fortran's DOALL statements).
Stage compiler that compiles to assembly language of a theoretical machine, like some Prolog implementations
This Prolog machine is also known as the Warren Abstract Machine (or WAM). Bytecode compilers for Java,
Python, and many more are also a subtype of this.
Just-in-time compiler, used by Smalltalk and Java systems, and also by Microsoft .Net's Common Intermediate
Language (CIL)
Applications are delivered in bytecode, which is compiled to native machine code just prior to execution.
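As a very naive illustration of the source-to-source idea, the following Python sketch rewrites C-style for loops by prefixing them with an OpenMP annotation using plain text manipulation (the input program and the transformation are invented for this example; a real parallelizing compiler would first prove the loop safe to parallelize):

    # Naive source-to-source rewriter: annotate every C-style "for" loop
    # with an OpenMP pragma.  Purely illustrative; no dependence analysis.
    def annotate_loops(c_source):
        out = []
        for line in c_source.splitlines():
            if line.lstrip().startswith("for ("):
                indent = line[:len(line) - len(line.lstrip())]
                out.append(indent + "#pragma omp parallel for")
            out.append(line)
        return "\n".join(out)

    program = "for (int i = 0; i < n; i++) {\n    a[i] = b[i] + c[i];\n}"
    print(annotate_loops(program))
    # #pragma omp parallel for
    # for (int i = 0; i < n; i++) {
    #     a[i] = b[i] + c[i];
    # }

Both the input and the output are (C) source code, which is what distinguishes this from an ordinary compiler's translation to machine code.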
Front end
The front end analyzes the source code to build an internal representation of the program, called the intermediate representation or IR. It also manages the symbol table, a data structure mapping each symbol in the source code to associated information such as location, type and scope. This is done over several phases, which include some of the following:
1. Line reconstruction. Languages which strop their keywords or allow arbitrary spaces within identifiers require a
phase before parsing, which converts the input character sequence to a canonical form ready for the parser. The top-
down, recursive-descent, table-driven parsers used in the 1960s typically read the source one character at a time and
did not require a separate tokenizing phase. Atlas Autocode, and Imp (and some implementations of Algol and
Coral66) are examples of stropped languages whose compilers would have a Line Reconstruction phase.
2. Lexical analysis breaks the source code text into small pieces called tokens. Each token is a single atomic unit of the language, for instance a keyword, identifier or symbol name. The token syntax is typically a regular language, so a finite state automaton constructed from a regular expression can be used to recognize it. This phase is also called lexing or scanning, and the software doing lexical analysis is called a lexical analyzer or scanner. A small sketch combining lexing with parsing (phase 4) follows this list.
3. Preprocessing. Some languages, e.g., C, require a preprocessing phase which supports macro substitution and
conditional compilation. Typically the preprocessing phase occurs before syntactic or semantic analysis; e.g. in the
case of C, the preprocessor manipulates lexical tokens rather than syntactic forms. However, some languages such
as Scheme support macro substitutions based on syntactic forms.
4. Syntax analysis involves parsing the token sequence to identify the syntactic structure of the program. This phase
typically builds a parse tree, which replaces the linear sequence of tokens with a tree structure built according to the
rules of a formal grammar which define the language's syntax. The parse tree is often analyzed, augmented, and
transformed by later phases in the compiler.
5. Semantic analysis is the phase in which the compiler adds semantic information to the parse tree and builds the
symbol table. This phase performs semantic checks such as type checking (checking for type errors), or object
binding (associating variable and function references with their definitions), or definite assignment (requiring all
local variables to be initialized before use), rejecting incorrect programs or issuing warnings. Semantic analysis
usually requires a complete parse tree, meaning that this phase logically follows the parsing phase, and logically
precedes the code generation phase, though it is often possible to fold multiple phases into one pass over the code in
a compiler implementation.
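To make phases 2 and 4 concrete, here is a minimal Python sketch of a lexer and a recursive-descent parser for simple arithmetic expressions (the grammar and all names are invented for this example):

    import re

    # Phase 2: lexical analysis.  Token syntax is a regular language, so a
    # regular expression per token class is enough to recognize tokens.
    TOKEN_RE = re.compile(r"\s*(?:(\d+)|(\S))")

    def tokenize(text):
        tokens = []
        for number, symbol in TOKEN_RE.findall(text):
            tokens.append(("NUM", int(number)) if number else ("SYM", symbol))
        return tokens

    # Phase 4: syntax analysis by recursive descent, one function per rule:
    #   expr -> term ('+' term)*        term -> NUM | '(' expr ')'
    def parse_expr(tokens, i=0):
        node, i = parse_term(tokens, i)
        while i < len(tokens) and tokens[i] == ("SYM", "+"):
            right, i = parse_term(tokens, i + 1)
            node = ("+", node, right)            # grow the parse tree
        return node, i

    def parse_term(tokens, i):
        kind, value = tokens[i]
        if kind == "NUM":
            return ("num", value), i + 1
        if (kind, value) == ("SYM", "("):
            node, i = parse_expr(tokens, i + 1)
            return node, i + 1                   # skip the closing ')'
        raise SyntaxError("unexpected token %r" % (value,))

    tree, _ = parse_expr(tokenize("1 + (2 + 3)"))
    print(tree)   # ('+', ('num', 1), ('+', ('num', 2), ('num', 3)))

The parse tree built here is the structure that semantic analysis (phase 5) would then annotate with types and symbol information.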
Back end
The term back end is sometimes confused with code generator because of the overlapped functionality of generating
assembly code. Some literature uses middle end to distinguish the generic analysis and optimization phases in the back
end from the machine-dependent code generators.
The main phases of the back end include the following:
1. Analysis: This is the gathering of program information from the intermediate representation derived from the input.
Typical analyses are data flow analysis to build use-define chains, dependence analysis, alias analysis, pointer
analysis, escape analysis etc. Accurate analysis is the basis for any compiler optimization. The call graph and
control flow graph are usually also built during the analysis phase.
2. Optimization: the intermediate language representation is transformed into functionally equivalent but faster (or smaller) forms. Popular optimizations are inline expansion, dead code elimination, constant propagation, loop transformation, register allocation and even automatic parallelization. A small example follows this list.
3. Code generation: the transformed intermediate language is translated into the output language, usually the native
machine language of the system. This involves resource and storage decisions, such as deciding which variables to
fit into registers and memory and the selection and scheduling of appropriate machine instructions along with their
associated addressing modes (see also Sethi-Ullman algorithm).
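To make the optimization phase concrete, here is a minimal Python sketch of two of the optimizations named above, constant propagation with folding and dead code elimination, applied to a toy three-address intermediate representation (the IR format and all names are invented for this example):

    # Toy IR: a list of (dest, op, arg1, arg2) tuples; integer operands
    # are constants, string operands name earlier results.  The program:
    #   a = 2 + 3;  b = a * 4;  c = b + 0 (c is never used);  return b
    ir = [("a", "+", 2, 3), ("b", "*", "a", 4),
          ("c", "+", "b", 0), ("ret", "id", "b", None)]

    def optimize(ir):
        # Constant propagation and folding: evaluate instructions whose
        # operands are known constants, and remember the results.
        consts, out = {}, []
        for dest, op, x, y in ir:
            x = consts.get(x, x)              # propagate known constants
            y = consts.get(y, y)
            if op in ("+", "*") and isinstance(x, int) and isinstance(y, int):
                consts[dest] = x + y if op == "+" else x * y   # fold
            else:
                out.append((dest, op, x, y))
        # Dead code elimination (single pass; a real compiler iterates to
        # a fixed point): drop instructions whose result is never used.
        used = set()
        for _, _, x, y in out:
            used.update((x, y))
        return [ins for ins in out if ins[0] in used or ins[0] == "ret"]

    print(optimize(ir))   # [('ret', 'id', 20, None)]
    # The whole computation folds down to 'return 20'.

Note how the analysis (which operands are constant, which results are used) is what licenses each transformation, in line with the point below that analysis is the prerequisite for optimization.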
Compiler analysis is the prerequisite for any compiler optimization, and the two work tightly together. For example, dependence analysis is crucial for loop transformation.
In addition, the scope of compiler analysis and optimization varies greatly, from as small as a basic block to the procedure/function level, or even the whole program (interprocedural optimization). Obviously, a compiler can potentially do a better job using a broader view. But that broad view is not free: large-scope analysis and optimization are very costly in terms of compilation time and memory space; this is especially true for interprocedural analysis and optimization.
Interprocedural analysis and optimization are common in modern commercial compilers from HP, IBM, SGI, Intel, Microsoft, and Sun Microsystems. The open source GCC was long criticized for lacking powerful interprocedural optimizations, but this is changing. Another open source compiler with a full analysis and optimization infrastructure is Open64, which is used by many organizations for research and commercial purposes.
Due to the extra time and space needed for compiler analysis and optimizations, some compilers skip them by default.
Users have to use compilation options to explicitly tell the compiler which optimizations should be enabled.
Related techniques
Assembly language is not a high-level language and a program that compiles it is more commonly known as an
assembler, with the inverse program known as a disassembler.
A program that translates from a low level language to a higher level one is a decompiler.
A program that translates between high-level languages is usually called a language translator, source-to-source translator, language converter, or language rewriter. The last term is usually applied to translations that do not involve a change of language.
See also
List of compilers
Abstract interpretation
Attribute grammar
Bottom-up parsing
Error avalanche
Just-in-time compilation
Linker
List of important publications in computer science#Compilers
Metacompilation
Semantics encoding
Notes
1. ^ IP: "The World's First COBOL Compilers", 12 June 1997
2. ^ T. Hart and M. Levin, "The New Compiler", AIM-39, CSAIL Digital Archive, Artificial Intelligence Laboratory Series
3. ^ "The PL/0 compiler/interpreter"
4. ^ Book description at the ACM Digital Library
References
Compiler textbook references: a collection of references to mainstream compiler construction textbooks
Compilers: Principles, Techniques and Tools by Alfred V. Aho, Ravi Sethi, and Jeffrey D. Ullman (ISBN 0-201-10088-6). Also known as "The Dragon Book."
Advanced Compiler Design and Implementation by Steven Muchnick (ISBN 1-55860-320-4).
Engineering a Compiler by Keith D. Cooper and Linda Torczon. Morgan Kaufmann 2004, ISBN 1-55860-699-8.
Understanding and Writing Compilers: A Do It Yourself Guide by Richard Bornat (ISBN 0-333-21732-2). An on-line version of the book is available.
An Overview of the Production Quality Compiler-Compiler Project by Leverett, Cattell, Hobbs, Newcomer, Reiner, Schatz and Wulf. Computer 13(8):38-49 (August 1980).
Compiler Construction by Niklaus Wirth (ISBN 0-201-40353-6). Addison-Wesley 1996, 176 pages.
Programming Language Pragmatics by Michael Scott (ISBN 0-12-633951-1). Morgan Kaufmann 2005, 2nd edition, 912 pages.
"A History of Language Processor Technology in IBM" by F. E. Allen, IBM Journal of Research and Development, v.25, no.5, September 1981.
External links
The [Link] newsgroup and RSS feed
Hardware compilation mailing list