author     Dimitry Andric <dim@FreeBSD.org>   2017-12-18 20:10:56 +0000
committer  Dimitry Andric <dim@FreeBSD.org>   2017-12-18 20:10:56 +0000
commit     044eb2f6afba375a914ac9d8024f8f5142bb912e (patch)
tree       1475247dc9f9fe5be155ebd4c9069c75aadf8c20 /docs/tutorial
parent     eb70dddbd77e120e5d490bd8fbe7ff3f8fa81c6b (diff)
Diffstat (limited to 'docs/tutorial')
-rw-r--r--  docs/tutorial/BuildingAJIT1.rst  234
-rw-r--r--  docs/tutorial/BuildingAJIT2.rst   77
-rw-r--r--  docs/tutorial/BuildingAJIT3.rst   33
-rw-r--r--  docs/tutorial/BuildingAJIT4.rst    2
-rw-r--r--  docs/tutorial/BuildingAJIT5.rst    4
5 files changed, 186 insertions, 164 deletions
diff --git a/docs/tutorial/BuildingAJIT1.rst b/docs/tutorial/BuildingAJIT1.rst
index 88f7aa5abbc70..9d7f50477836e 100644
--- a/docs/tutorial/BuildingAJIT1.rst
+++ b/docs/tutorial/BuildingAJIT1.rst
@@ -75,8 +75,7 @@ will look like:
std::unique_ptr<Module> M = buildModule();
JIT J;
Handle H = J.addModule(*M);
- int (*Main)(int, char*[]) =
- (int(*)(int, char*[])J.findSymbol("main").getAddress();
+ int (*Main)(int, char*[]) = (int(*)(int, char*[]))J.getSymbolAddress("main");
int Result = Main();
J.removeModule(H);
@@ -111,14 +110,24 @@ usual include guards and #includes [2]_, we get to the definition of our class:
#ifndef LLVM_EXECUTIONENGINE_ORC_KALEIDOSCOPEJIT_H
#define LLVM_EXECUTIONENGINE_ORC_KALEIDOSCOPEJIT_H
+ #include "llvm/ADT/STLExtras.h"
#include "llvm/ExecutionEngine/ExecutionEngine.h"
+ #include "llvm/ExecutionEngine/JITSymbol.h"
#include "llvm/ExecutionEngine/RTDyldMemoryManager.h"
+ #include "llvm/ExecutionEngine/SectionMemoryManager.h"
#include "llvm/ExecutionEngine/Orc/CompileUtils.h"
#include "llvm/ExecutionEngine/Orc/IRCompileLayer.h"
#include "llvm/ExecutionEngine/Orc/LambdaResolver.h"
- #include "llvm/ExecutionEngine/Orc/ObjectLinkingLayer.h"
+ #include "llvm/ExecutionEngine/Orc/RTDyldObjectLinkingLayer.h"
+ #include "llvm/IR/DataLayout.h"
#include "llvm/IR/Mangler.h"
#include "llvm/Support/DynamicLibrary.h"
+ #include "llvm/Support/raw_ostream.h"
+ #include "llvm/Target/TargetMachine.h"
+ #include <algorithm>
+ #include <memory>
+ #include <string>
+ #include <vector>
namespace llvm {
namespace orc {
@@ -127,38 +136,39 @@ usual include guards and #includes [2]_, we get to the definition of our class:
private:
std::unique_ptr<TargetMachine> TM;
const DataLayout DL;
- ObjectLinkingLayer<> ObjectLayer;
- IRCompileLayer<decltype(ObjectLayer)> CompileLayer;
+ RTDyldObjectLinkingLayer ObjectLayer;
+ IRCompileLayer<decltype(ObjectLayer), SimpleCompiler> CompileLayer;
public:
- typedef decltype(CompileLayer)::ModuleSetHandleT ModuleHandleT;
+ using ModuleHandle = decltype(CompileLayer)::ModuleHandleT;
-Our class begins with four members: A TargetMachine, TM, which will be used
-to build our LLVM compiler instance; A DataLayout, DL, which will be used for
+Our class begins with four members: A TargetMachine, TM, which will be used to
+build our LLVM compiler instance; A DataLayout, DL, which will be used for
symbol mangling (more on that later), and two ORC *layers*: an
-ObjectLinkingLayer and a IRCompileLayer. We'll be talking more about layers in
-the next chapter, but for now you can think of them as analogous to LLVM
+RTDyldObjectLinkingLayer and a CompileLayer. We'll be talking more about layers
+in the next chapter, but for now you can think of them as analogous to LLVM
Passes: they wrap up useful JIT utilities behind an easy to compose interface.
-The first layer, ObjectLinkingLayer, is the foundation of our JIT: it takes
-in-memory object files produced by a compiler and links them on the fly to make
-them executable. This JIT-on-top-of-a-linker design was introduced in MCJIT,
-however the linker was hidden inside the MCJIT class. In ORC we expose the
-linker so that clients can access and configure it directly if they need to. In
-this tutorial our ObjectLinkingLayer will just be used to support the next layer
-in our stack: the IRCompileLayer, which will be responsible for taking LLVM IR,
-compiling it, and passing the resulting in-memory object files down to the
-object linking layer below.
+The first layer, ObjectLayer, is the foundation of our JIT: it takes in-memory
+object files produced by a compiler and links them on the fly to make them
+executable. This JIT-on-top-of-a-linker design was introduced in MCJIT, however
+the linker was hidden inside the MCJIT class. In ORC we expose the linker so
+that clients can access and configure it directly if they need to. In this
+tutorial our ObjectLayer will just be used to support the next layer in our
+stack: the CompileLayer, which will be responsible for taking LLVM IR, compiling
+it, and passing the resulting in-memory object files down to the object linking
+layer below.
That's it for member variables, after that we have a single typedef:
-ModuleHandleT. This is the handle type that will be returned from our JIT's
+ModuleHandle. This is the handle type that will be returned from our JIT's
addModule method, and can be passed to the removeModule method to remove a
module. The IRCompileLayer class already provides a convenient handle type
-(IRCompileLayer::ModuleSetHandleT), so we just alias our ModuleHandleT to this.
+(IRCompileLayer::ModuleHandleT), so we just alias our ModuleHandle to this.
.. code-block:: c++
KaleidoscopeJIT()
: TM(EngineBuilder().selectTarget()), DL(TM->createDataLayout()),
+ ObjectLayer([]() { return std::make_shared<SectionMemoryManager>(); }),
CompileLayer(ObjectLayer, SimpleCompiler(*TM)) {
llvm::sys::DynamicLibrary::LoadLibraryPermanently(nullptr);
}
@@ -166,17 +176,22 @@ module. The IRCompileLayer class already provides a convenient handle type
TargetMachine &getTargetMachine() { return *TM; }
Next up we have our class constructor. We begin by initializing TM using the
-EngineBuilder::selectTarget helper method, which constructs a TargetMachine for
-the current process. Next we use our newly created TargetMachine to initialize
-DL, our DataLayout. Then we initialize our IRCompileLayer. Our IRCompile layer
-needs two things: (1) A reference to our object linking layer, and (2) a
-compiler instance to use to perform the actual compilation from IR to object
-files. We use the off-the-shelf SimpleCompiler instance for now. Finally, in
-the body of the constructor, we call the DynamicLibrary::LoadLibraryPermanently
-method with a nullptr argument. Normally the LoadLibraryPermanently method is
-called with the path of a dynamic library to load, but when passed a null
-pointer it will 'load' the host process itself, making its exported symbols
-available for execution.
+EngineBuilder::selectTarget helper method which constructs a TargetMachine for
+the current process. Then we use our newly created TargetMachine to initialize
+DL, our DataLayout. After that we need to initialize our ObjectLayer. The
+ObjectLayer requires a function object that will build a JIT memory manager for
+each module that is added (a JIT memory manager manages memory allocations,
+memory permissions, and registration of exception handlers for JIT'd code). For
+this we use a lambda that returns a SectionMemoryManager, an off-the-shelf
+utility that provides all the basic memory management functionality required for
+this chapter. Next we initialize our CompileLayer. The CompileLayer needs two
+things: (1) A reference to our object layer, and (2) a compiler instance to use
+to perform the actual compilation from IR to object files. We use the
+off-the-shelf SimpleCompiler instance for now. Finally, in the body of the
+constructor, we call the DynamicLibrary::LoadLibraryPermanently method with a
+nullptr argument. Normally the LoadLibraryPermanently method is called with the
+path of a dynamic library to load, but when passed a null pointer it will 'load'
+the host process itself, making its exported symbols available for execution.
.. code-block:: c++
@@ -191,48 +206,36 @@ available for execution.
return Sym;
return JITSymbol(nullptr);
},
- [](const std::string &S) {
+ [](const std::string &Name) {
if (auto SymAddr =
RTDyldMemoryManager::getSymbolAddressInProcess(Name))
return JITSymbol(SymAddr, JITSymbolFlags::Exported);
return JITSymbol(nullptr);
});
- // Build a singleton module set to hold our module.
- std::vector<std::unique_ptr<Module>> Ms;
- Ms.push_back(std::move(M));
-
// Add the set to the JIT with the resolver we created above and a newly
// created SectionMemoryManager.
- return CompileLayer.addModuleSet(std::move(Ms),
- make_unique<SectionMemoryManager>(),
- std::move(Resolver));
+ return cantFail(CompileLayer.addModule(std::move(M),
+ std::move(Resolver)));
}
Now we come to the first of our JIT API methods: addModule. This method is
responsible for adding IR to the JIT and making it available for execution. In
this initial implementation of our JIT we will make our modules "available for
-execution" by adding them straight to the IRCompileLayer, which will
-immediately compile them. In later chapters we will teach our JIT to be lazier
-and instead add the Modules to a "pending" list to be compiled if and when they
-are first executed.
-
-To add our module to the IRCompileLayer we need to supply two auxiliary objects
-(as well as the module itself): a memory manager and a symbol resolver. The
-memory manager will be responsible for managing the memory allocated to JIT'd
-machine code, setting memory permissions, and registering exception handling
-tables (if the JIT'd code uses exceptions). For our memory manager we will use
-the SectionMemoryManager class: another off-the-shelf utility that provides all
-the basic functionality we need. The second auxiliary class, the symbol
-resolver, is more interesting for us. It exists to tell the JIT where to look
-when it encounters an *external symbol* in the module we are adding. External
+execution" by adding them straight to the CompileLayer, which will immediately
+compile them. In later chapters we will teach our JIT to defer compilation
+of individual functions until they're actually called.
+
+To add our module to the CompileLayer we need to supply both the module and a
+symbol resolver. The symbol resolver is responsible for supplying the JIT with
+an address for each *external symbol* in the module we are adding. External
symbols are any symbol not defined within the module itself, including calls to
functions outside the JIT and calls to functions defined in other modules that
-have already been added to the JIT. It may seem as though modules added to the
-JIT should "know about one another" by default, but since we would still have to
+have already been added to the JIT. (It may seem as though modules added to the
+JIT should know about one another by default, but since we would still have to
supply a symbol resolver for references to code outside the JIT it turns out to
-be easier to just re-use this one mechanism for all symbol resolution. This has
-the added benefit that the user has full control over the symbol resolution
+be easier to re-use this one mechanism for all symbol resolution.) This has the
+added benefit that the user has full control over the symbol resolution
process. Should we search for definitions within the JIT first, then fall back
on external definitions? Or should we prefer external definitions where
available and only JIT code if we don't already have an available
@@ -263,12 +266,13 @@ symbol definition via either of these paths, the JIT will refuse to accept our
module, returning a "symbol not found" error.
Now that we've built our symbol resolver, we're ready to add our module to the
-JIT. We do this by calling the CompileLayer's addModuleSet method [4]_. Since
-we only have a single Module and addModuleSet expects a collection, we will
-create a vector of modules and add our module as the only member. Since we
-have already typedef'd our ModuleHandleT type to be the same as the
-CompileLayer's handle type, we can return the handle from addModuleSet
-directly from our addModule method.
+JIT. We do this by calling the CompileLayer's addModule method. The addModule
+method returns an ``Expected<CompileLayer::ModuleHandle>``, since in more
+advanced JIT configurations it could fail. In our basic configuration we know
+that it will always succeed so we use the cantFail utility to assert that no
+error occurred, and extract the handle value. Since we have already typedef'd
+our ModuleHandle type to be the same as the CompileLayer's handle type, we can
+return the unwrapped handle directly.
.. code-block:: c++
@@ -279,19 +283,29 @@ directly from our addModule method.
return CompileLayer.findSymbol(MangledNameStream.str(), true);
}
+ JITTargetAddress getSymbolAddress(const std::string Name) {
+ return cantFail(findSymbol(Name).getAddress());
+ }
+
void removeModule(ModuleHandle H) {
- CompileLayer.removeModuleSet(H);
+ cantFail(CompileLayer.removeModule(H));
}
Now that we can add code to our JIT, we need a way to find the symbols we've
-added to it. To do that we call the findSymbol method on our IRCompileLayer,
-but with a twist: We have to *mangle* the name of the symbol we're searching
-for first. The reason for this is that the ORC JIT components use mangled
-symbols internally the same way a static compiler and linker would, rather
-than using plain IR symbol names. The kind of mangling will depend on the
-DataLayout, which in turn depends on the target platform. To allow us to
-remain portable and search based on the un-mangled name, we just re-produce
-this mangling ourselves.
+added to it. To do that we call the findSymbol method on our CompileLayer, but
+with a twist: We have to *mangle* the name of the symbol we're searching for
+first. The ORC JIT components use mangled symbols internally the same way a
+static compiler and linker would, rather than using plain IR symbol names. This
+allows JIT'd code to interoperate easily with precompiled code in the
+application or shared libraries. The kind of mangling will depend on the
+DataLayout, which in turn depends on the target platform. To allow us to remain
+portable and search based on the un-mangled name, we just re-produce this
+mangling ourselves.
+
+Next we have a convenience function, getSymbolAddress, which returns the address
+of a given symbol. Like CompileLayer's addModule function, JITSymbol's getAddress
+function is allowed to fail [4]_, however we know that it will not in our simple
+example, so we wrap it in a call to cantFail.
We now come to the last method in our JIT API: removeModule. This method is
responsible for destructing the MemoryManager and SymbolResolver that were
@@ -302,7 +316,10 @@ treated as a duplicate definition when the next top-level expression is
entered. It is generally good to free any module that you know you won't need
to call further, just to free up the resources dedicated to it. However, you
don't strictly need to do this: All resources will be cleaned up when your
-JIT class is destructed, if they haven't been freed before then.
+JIT class is destructed, if they haven't been freed before then. Like
+``CompileLayer::addModule`` and ``JITSymbol::getAddress``, removeModule may
+fail in general but will never fail in our example, so we wrap it in a call to
+cantFail.
This brings us to the end of Chapter 1 of Building a JIT. You now have a basic
but fully functioning JIT stack that you can use to take LLVM IR and make it
@@ -321,7 +338,7 @@ example, use:
.. code-block:: bash
# Compile
- clang++ -g toy.cpp `llvm-config --cxxflags --ldflags --system-libs --libs core orc native` -O3 -o toy
+ clang++ -g toy.cpp `llvm-config --cxxflags --ldflags --system-libs --libs core orcjit native` -O3 -o toy
# Run
./toy
@@ -337,36 +354,45 @@ Here is the code:
left as an exercise for the reader. (The KaleidoscopeJIT.h used in the
original tutorials will be a helpful reference).
-.. [2] +-----------------------+-----------------------------------------------+
- | File | Reason for inclusion |
- +=======================+===============================================+
- | ExecutionEngine.h | Access to the EngineBuilder::selectTarget |
- | | method. |
- +-----------------------+-----------------------------------------------+
- | | Access to the |
- | RTDyldMemoryManager.h | RTDyldMemoryManager::getSymbolAddressInProcess|
- | | method. |
- +-----------------------+-----------------------------------------------+
- | CompileUtils.h | Provides the SimpleCompiler class. |
- +-----------------------+-----------------------------------------------+
- | IRCompileLayer.h | Provides the IRCompileLayer class. |
- +-----------------------+-----------------------------------------------+
- | | Access the createLambdaResolver function, |
- | LambdaResolver.h | which provides easy construction of symbol |
- | | resolvers. |
- +-----------------------+-----------------------------------------------+
- | ObjectLinkingLayer.h | Provides the ObjectLinkingLayer class. |
- +-----------------------+-----------------------------------------------+
- | Mangler.h | Provides the Mangler class for platform |
- | | specific name-mangling. |
- +-----------------------+-----------------------------------------------+
- | DynamicLibrary.h | Provides the DynamicLibrary class, which |
- | | makes symbols in the host process searchable. |
- +-----------------------+-----------------------------------------------+
+.. [2] +-----------------------------+-----------------------------------------------+
+ | File | Reason for inclusion |
+ +=============================+===============================================+
+ | STLExtras.h | LLVM utilities that are useful when working |
+ | | with the STL. |
+ +-----------------------------+-----------------------------------------------+
+ | ExecutionEngine.h | Access to the EngineBuilder::selectTarget |
+ | | method. |
+ +-----------------------------+-----------------------------------------------+
+ | | Access to the |
+ | RTDyldMemoryManager.h | RTDyldMemoryManager::getSymbolAddressInProcess|
+ | | method. |
+ +-----------------------------+-----------------------------------------------+
+ | CompileUtils.h | Provides the SimpleCompiler class. |
+ +-----------------------------+-----------------------------------------------+
+ | IRCompileLayer.h | Provides the IRCompileLayer class. |
+ +-----------------------------+-----------------------------------------------+
+ | | Access the createLambdaResolver function, |
+ | LambdaResolver.h | which provides easy construction of symbol |
+ | | resolvers. |
+ +-----------------------------+-----------------------------------------------+
+ | RTDyldObjectLinkingLayer.h | Provides the RTDyldObjectLinkingLayer class. |
+ +-----------------------------+-----------------------------------------------+
+ | Mangler.h | Provides the Mangler class for platform |
+ | | specific name-mangling. |
+ +-----------------------------+-----------------------------------------------+
+ | DynamicLibrary.h | Provides the DynamicLibrary class, which |
+ | | makes symbols in the host process searchable. |
+ +-----------------------------+-----------------------------------------------+
+ | | A fast output stream class. We use the |
+ | raw_ostream.h | raw_string_ostream subclass for symbol |
+ | | mangling |
+ +-----------------------------+-----------------------------------------------+
+ | TargetMachine.h | LLVM target machine description class. |
+ +-----------------------------+-----------------------------------------------+
.. [3] Actually they don't have to be lambdas, any object with a call operator
will do, including plain old functions or std::functions.
-.. [4] ORC layers accept sets of Modules, rather than individual ones, so that
- all Modules in the set could be co-located by the memory manager, though
- this feature is not yet implemented.
+.. [4] ``JITSymbol::getAddress`` will force the JIT to compile the definition of
+ the symbol if it hasn't already been compiled, and since the compilation
+ process could fail getAddress must be able to return this failure.
diff --git a/docs/tutorial/BuildingAJIT2.rst b/docs/tutorial/BuildingAJIT2.rst
index 2f22bdad6c141..f1861033cc795 100644
--- a/docs/tutorial/BuildingAJIT2.rst
+++ b/docs/tutorial/BuildingAJIT2.rst
@@ -46,7 +46,7 @@ Chapter 1 and compose an ORC *IRTransformLayer* on top. We will look at how the
IRTransformLayer works in more detail below, but the interface is simple: the
constructor for this layer takes a reference to the layer below (as all layers
do) plus an *IR optimization function* that it will apply to each Module that
-is added via addModuleSet:
+is added via addModule:
.. code-block:: c++
@@ -54,19 +54,20 @@ is added via addModuleSet:
private:
std::unique_ptr<TargetMachine> TM;
const DataLayout DL;
- ObjectLinkingLayer<> ObjectLayer;
+ RTDyldObjectLinkingLayer ObjectLayer;
IRCompileLayer<decltype(ObjectLayer)> CompileLayer;
- typedef std::function<std::unique_ptr<Module>(std::unique_ptr<Module>)>
- OptimizeFunction;
+ using OptimizeFunction =
+ std::function<std::shared_ptr<Module>(std::shared_ptr<Module>)>;
IRTransformLayer<decltype(CompileLayer), OptimizeFunction> OptimizeLayer;
public:
- typedef decltype(OptimizeLayer)::ModuleSetHandleT ModuleHandle;
+ using ModuleHandle = decltype(OptimizeLayer)::ModuleHandleT;
KaleidoscopeJIT()
: TM(EngineBuilder().selectTarget()), DL(TM->createDataLayout()),
+ ObjectLayer([]() { return std::make_shared<SectionMemoryManager>(); }),
CompileLayer(ObjectLayer, SimpleCompiler(*TM)),
OptimizeLayer(CompileLayer,
[this](std::unique_ptr<Module> M) {
@@ -101,9 +102,8 @@ define below.
.. code-block:: c++
// ...
- return OptimizeLayer.addModuleSet(std::move(Ms),
- make_unique<SectionMemoryManager>(),
- std::move(Resolver));
+ return cantFail(OptimizeLayer.addModule(std::move(M),
+ std::move(Resolver)));
// ...
.. code-block:: c++
@@ -115,17 +115,17 @@ define below.
.. code-block:: c++
// ...
- OptimizeLayer.removeModuleSet(H);
+ cantFail(OptimizeLayer.removeModule(H));
// ...
Next we need to replace references to 'CompileLayer' with references to
OptimizeLayer in our key methods: addModule, findSymbol, and removeModule. In
addModule we need to be careful to replace both references: the findSymbol call
-inside our resolver, and the call through to addModuleSet.
+inside our resolver, and the call through to addModule.
.. code-block:: c++
- std::unique_ptr<Module> optimizeModule(std::unique_ptr<Module> M) {
+ std::shared_ptr<Module> optimizeModule(std::shared_ptr<Module> M) {
// Create a function pass manager.
auto FPM = llvm::make_unique<legacy::FunctionPassManager>(M.get());
@@ -166,37 +166,30 @@ implementations of the layer concept that can be devised:
template <typename BaseLayerT, typename TransformFtor>
class IRTransformLayer {
public:
- typedef typename BaseLayerT::ModuleSetHandleT ModuleSetHandleT;
+ using ModuleHandleT = typename BaseLayerT::ModuleHandleT;
IRTransformLayer(BaseLayerT &BaseLayer,
TransformFtor Transform = TransformFtor())
: BaseLayer(BaseLayer), Transform(std::move(Transform)) {}
- template <typename ModuleSetT, typename MemoryManagerPtrT,
- typename SymbolResolverPtrT>
- ModuleSetHandleT addModuleSet(ModuleSetT Ms,
- MemoryManagerPtrT MemMgr,
- SymbolResolverPtrT Resolver) {
-
- for (auto I = Ms.begin(), E = Ms.end(); I != E; ++I)
- *I = Transform(std::move(*I));
-
- return BaseLayer.addModuleSet(std::move(Ms), std::move(MemMgr),
- std::move(Resolver));
+ Expected<ModuleHandleT>
+ addModule(std::shared_ptr<Module> M,
+ std::shared_ptr<JITSymbolResolver> Resolver) {
+ return BaseLayer.addModule(Transform(std::move(M)), std::move(Resolver));
}
- void removeModuleSet(ModuleSetHandleT H) { BaseLayer.removeModuleSet(H); }
+ void removeModule(ModuleHandleT H) { BaseLayer.removeModule(H); }
JITSymbol findSymbol(const std::string &Name, bool ExportedSymbolsOnly) {
return BaseLayer.findSymbol(Name, ExportedSymbolsOnly);
}
- JITSymbol findSymbolIn(ModuleSetHandleT H, const std::string &Name,
+ JITSymbol findSymbolIn(ModuleHandleT H, const std::string &Name,
bool ExportedSymbolsOnly) {
return BaseLayer.findSymbolIn(H, Name, ExportedSymbolsOnly);
}
- void emitAndFinalize(ModuleSetHandleT H) {
+ void emitAndFinalize(ModuleHandleT H) {
BaseLayer.emitAndFinalize(H);
}
@@ -215,14 +208,14 @@ comments. It is a template class with two template arguments: ``BaseLayerT`` and
``TransformFtor`` that provide the type of the base layer and the type of the
"transform functor" (in our case a std::function) respectively. This class is
concerned with two very simple jobs: (1) Running every IR Module that is added
-with addModuleSet through the transform functor, and (2) conforming to the ORC
+with addModule through the transform functor, and (2) conforming to the ORC
layer interface. The interface consists of one typedef and five methods:
+------------------+-----------------------------------------------------------+
| Interface | Description |
+==================+===========================================================+
| | Provides a handle that can be used to identify a module |
-| ModuleSetHandleT | set when calling findSymbolIn, removeModuleSet, or |
+| ModuleHandleT | set when calling findSymbolIn, removeModule, or |
| | emitAndFinalize. |
+------------------+-----------------------------------------------------------+
| | Takes a given set of Modules and makes them "available |
@@ -231,28 +224,28 @@ layer interface. The interface consists of one typedef and five methods:
| | the address of the symbols should be read/writable (for |
| | data symbols), or executable (for function symbols) after |
| | JITSymbol::getAddress() is called. Note: This means that |
-| addModuleSet | addModuleSet doesn't have to compile (or do any other |
+| addModule | addModule doesn't have to compile (or do any other |
| | work) up-front. It *can*, like IRCompileLayer, act |
| | eagerly, but it can also simply record the module and |
| | take no further action until somebody calls |
| | JITSymbol::getAddress(). In IRTransformLayer's case |
-| | addModuleSet eagerly applies the transform functor to |
+| | addModule eagerly applies the transform functor to |
| | each module in the set, then passes the resulting set |
| | of mutated modules down to the layer below. |
+------------------+-----------------------------------------------------------+
| | Removes a set of modules from the JIT. Code or data |
-| removeModuleSet | defined in these modules will no longer be available, and |
+| removeModule | defined in these modules will no longer be available, and |
| | the memory holding the JIT'd definitions will be freed. |
+------------------+-----------------------------------------------------------+
| | Searches for the named symbol in all modules that have |
-| | previously been added via addModuleSet (and not yet |
-| findSymbol | removed by a call to removeModuleSet). In |
+| | previously been added via addModule (and not yet |
+| findSymbol | removed by a call to removeModule). In |
| | IRTransformLayer we just pass the query on to the layer |
| | below. In our REPL this is our default way to search for |
| | function definitions. |
+------------------+-----------------------------------------------------------+
| | Searches for the named symbol in the module set indicated |
-| | by the given ModuleSetHandleT. This is just an optimized |
+| | by the given ModuleHandleT. This is just an optimized |
| | search, better for lookup-speed when you know exactly |
| | a symbol definition should be found. In IRTransformLayer |
| findSymbolIn | we just pass this query on to the layer below. In our |
@@ -262,7 +255,7 @@ layer interface. The interface consists of one typedef and five methods:
| | we just added. |
+------------------+-----------------------------------------------------------+
| | Forces all of the actions required to make the code and |
-| | data in a module set (represented by a ModuleSetHandleT) |
+| | data in a module set (represented by a ModuleHandleT) |
| | accessible. Behaves as if some symbol in the set had been |
| | searched for and JITSymbol::getSymbolAddress called. This |
| emitAndFinalize | is rarely needed, but can be useful when dealing with |
@@ -276,11 +269,11 @@ wrinkles like emitAndFinalize for performance), similar to the basic JIT API
operations we identified in Chapter 1. Conforming to the layer concept allows
classes to compose neatly by implementing their behaviors in terms of the these
same operations, carried out on the layer below. For example, an eager layer
-(like IRTransformLayer) can implement addModuleSet by running each module in the
+(like IRTransformLayer) can implement addModule by running each module in the
set through its transform up-front and immediately passing the result to the
-layer below. A lazy layer, by contrast, could implement addModuleSet by
+layer below. A lazy layer, by contrast, could implement addModule by
squirreling away the modules doing no other up-front work, but applying the
-transform (and calling addModuleSet on the layer below) when the client calls
+transform (and calling addModule on the layer below) when the client calls
findSymbol instead. The JIT'd program behavior will be the same either way, but
these choices will have different performance characteristics: Doing work
eagerly means the JIT takes longer up-front, but proceeds smoothly once this is
@@ -319,7 +312,7 @@ IRTransformLayer added to enable optimization. To build this example, use:
.. code-block:: bash
# Compile
- clang++ -g toy.cpp `llvm-config --cxxflags --ldflags --system-libs --libs core orc native` -O3 -o toy
+ clang++ -g toy.cpp `llvm-config --cxxflags --ldflags --system-libs --libs core orcjit native` -O3 -o toy
# Run
./toy
@@ -329,8 +322,8 @@ Here is the code:
:language: c++
.. [1] When we add our top-level expression to the JIT, any calls to functions
- that we defined earlier will appear to the ObjectLinkingLayer as
- external symbols. The ObjectLinkingLayer will call the SymbolResolver
- that we defined in addModuleSet, which in turn calls findSymbol on the
+ that we defined earlier will appear to the RTDyldObjectLinkingLayer as
+ external symbols. The RTDyldObjectLinkingLayer will call the SymbolResolver
+ that we defined in addModule, which in turn calls findSymbol on the
OptimizeLayer, at which point even a lazy transform layer will have to
do its work.
diff --git a/docs/tutorial/BuildingAJIT3.rst b/docs/tutorial/BuildingAJIT3.rst
index 071e92c74541d..9c4e59fe11764 100644
--- a/docs/tutorial/BuildingAJIT3.rst
+++ b/docs/tutorial/BuildingAJIT3.rst
@@ -21,7 +21,7 @@ Lazy Compilation
When we add a module to the KaleidoscopeJIT class from Chapter 2 it is
immediately optimized, compiled and linked for us by the IRTransformLayer,
-IRCompileLayer and ObjectLinkingLayer respectively. This scheme, where all the
+IRCompileLayer and RTDyldObjectLinkingLayer respectively. This scheme, where all the
work to make a Module executable is done up front, is simple to understand and
its performance characteristics are easy to reason about. However, it will lead
to very high startup times if the amount of code to be compiled is large, and
@@ -33,7 +33,7 @@ the ORC APIs provide us with a layer to lazily compile LLVM IR:
*CompileOnDemandLayer*.
The CompileOnDemandLayer class conforms to the layer interface described in
-Chapter 2, but its addModuleSet method behaves quite differently from the layers
+Chapter 2, but its addModule method behaves quite differently from the layers
we have seen so far: rather than doing any work up front, it just scans the
Modules being added and arranges for each function in them to be compiled the
first time it is called. To do this, the CompileOnDemandLayer creates two small
@@ -73,21 +73,22 @@ lazy compilation. We just need a few changes to the source:
private:
std::unique_ptr<TargetMachine> TM;
const DataLayout DL;
- std::unique_ptr<JITCompileCallbackManager> CompileCallbackManager;
- ObjectLinkingLayer<> ObjectLayer;
- IRCompileLayer<decltype(ObjectLayer)> CompileLayer;
+ RTDyldObjectLinkingLayer ObjectLayer;
+ IRCompileLayer<decltype(ObjectLayer), SimpleCompiler> CompileLayer;
- typedef std::function<std::unique_ptr<Module>(std::unique_ptr<Module>)>
- OptimizeFunction;
+ using OptimizeFunction =
+ std::function<std::shared_ptr<Module>(std::shared_ptr<Module>)>;
IRTransformLayer<decltype(CompileLayer), OptimizeFunction> OptimizeLayer;
+
+ std::unique_ptr<JITCompileCallbackManager> CompileCallbackManager;
CompileOnDemandLayer<decltype(OptimizeLayer)> CODLayer;
public:
- typedef decltype(CODLayer)::ModuleSetHandleT ModuleHandle;
+ using ModuleHandle = decltype(CODLayer)::ModuleHandleT;
First we need to include the CompileOnDemandLayer.h header, then add two new
-members: a std::unique_ptr<CompileCallbackManager> and a CompileOnDemandLayer,
+members: a std::unique_ptr<JITCompileCallbackManager> and a CompileOnDemandLayer,
to our class. The CompileCallbackManager member is used by the CompileOnDemandLayer
to create the compile callback needed for each function.
@@ -95,9 +96,10 @@ to create the compile callback needed for each function.
KaleidoscopeJIT()
: TM(EngineBuilder().selectTarget()), DL(TM->createDataLayout()),
+ ObjectLayer([]() { return std::make_shared<SectionMemoryManager>(); }),
CompileLayer(ObjectLayer, SimpleCompiler(*TM)),
OptimizeLayer(CompileLayer,
- [this](std::unique_ptr<Module> M) {
+ [this](std::shared_ptr<Module> M) {
return optimizeModule(std::move(M));
}),
CompileCallbackManager(
@@ -133,7 +135,7 @@ our CompileCallbackManager. Finally, we need to supply an "indirect stubs
manager builder": a utility function that constructs IndirectStubManagers, which
are in turn used to build the stubs for the functions in each module. The
CompileOnDemandLayer will call the indirect stub manager builder once for each
-call to addModuleSet, and use the resulting indirect stubs manager to create
+call to addModule, and use the resulting indirect stubs manager to create
-stubs for all functions in all modules in the set. If/when the module set is
+stubs for all functions in the module. If/when the module is
removed from the JIT the indirect stubs manager will be deleted, freeing any
memory allocated to the stubs. We supply this function by using the
@@ -144,9 +146,8 @@ createLocalIndirectStubsManagerBuilder utility.
// ...
if (auto Sym = CODLayer.findSymbol(Name, false))
// ...
- return CODLayer.addModuleSet(std::move(Ms),
- make_unique<SectionMemoryManager>(),
- std::move(Resolver));
+ return cantFail(CODLayer.addModule(std::move(Ms),
+ std::move(Resolver)));
// ...
// ...
@@ -154,7 +155,7 @@ createLocalIndirectStubsManagerBuilder utility.
// ...
// ...
- CODLayer.removeModuleSet(H);
+ CODLayer.removeModule(H);
// ...
Finally, we need to replace the references to OptimizeLayer in our addModule,
@@ -173,7 +174,7 @@ layer added to enable lazy function-at-a-time compilation. To build this example
.. code-block:: bash
# Compile
- clang++ -g toy.cpp `llvm-config --cxxflags --ldflags --system-libs --libs core orc native` -O3 -o toy
+ clang++ -g toy.cpp `llvm-config --cxxflags --ldflags --system-libs --libs core orcjit native` -O3 -o toy
# Run
./toy
diff --git a/docs/tutorial/BuildingAJIT4.rst b/docs/tutorial/BuildingAJIT4.rst
index 39d9198a85c3d..3d3f81e438584 100644
--- a/docs/tutorial/BuildingAJIT4.rst
+++ b/docs/tutorial/BuildingAJIT4.rst
@@ -36,7 +36,7 @@ Kaleidoscope ASTS. To build this example, use:
.. code-block:: bash
# Compile
- clang++ -g toy.cpp `llvm-config --cxxflags --ldflags --system-libs --libs core orc native` -O3 -o toy
+ clang++ -g toy.cpp `llvm-config --cxxflags --ldflags --system-libs --libs core orcjit native` -O3 -o toy
# Run
./toy
diff --git a/docs/tutorial/BuildingAJIT5.rst b/docs/tutorial/BuildingAJIT5.rst
index 94ea92ce5ad2b..0fda8610efbf1 100644
--- a/docs/tutorial/BuildingAJIT5.rst
+++ b/docs/tutorial/BuildingAJIT5.rst
@@ -40,8 +40,10 @@ Kaleidoscope ASTS. To build this example, use:
.. code-block:: bash
# Compile
- clang++ -g toy.cpp `llvm-config --cxxflags --ldflags --system-libs --libs core orc native` -O3 -o toy
+ clang++ -g toy.cpp `llvm-config --cxxflags --ldflags --system-libs --libs core orcjit native` -O3 -o toy
+ clang++ -g Server/server.cpp `llvm-config --cxxflags --ldflags --system-libs --libs core orcjit native` -O3 -o toy-server
# Run
+ ./toy-server &
./toy
Here is the code for the modified KaleidoscopeJIT: