<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd"
[<!ENTITY % poky SYSTEM "../poky.ent"> %poky; ] >
<chapter id='technical-details'>
<title>Technical Details</title>
<para>
This chapter provides technical details for various parts of the
Yocto Project.
Currently, topics include Yocto Project components,
cross-toolchain generation, shared state (sstate) cache,
automatically added runtime dependencies, Fakeroot and Pseudo,
x32, Wayland support, and licenses.
</para>
<section id='usingpoky-components'>
<title>Yocto Project Components</title>
<para>
The
<ulink url='&YOCTO_DOCS_DEV_URL;#bitbake-term'>BitBake</ulink>
task executor together with various types of configuration files form
the OpenEmbedded Core.
This section provides an overview of these components by describing
their use and how they interact.
</para>
<para>
BitBake handles the parsing and execution of the data files.
The data itself is of various types:
<itemizedlist>
<listitem><para><emphasis>Recipes:</emphasis> Provide details
about particular pieces of software.
</para></listitem>
<listitem><para><emphasis>Class Data:</emphasis> Abstracts
common build information (e.g. how to build a Linux kernel).
</para></listitem>
<listitem><para><emphasis>Configuration Data:</emphasis> Defines
machine-specific settings, policy decisions, and so forth.
Configuration data acts as the glue to bind everything
together.
</para></listitem>
</itemizedlist>
</para>
<para>
BitBake knows how to combine multiple data sources together and refers
to each data source as a layer.
For information on layers, see the
"<ulink url='&YOCTO_DOCS_DEV_URL;#understanding-and-creating-layers'>Understanding and
Creating Layers</ulink>" section of the Yocto Project Development Manual.
</para>
<para>
Following are some brief details on these core components.
For additional information on how these components interact during
a build, see the
"<link linkend='closer-look'>A Closer Look at the Yocto Project Development Environment</link>"
Chapter.
</para>
<section id='usingpoky-components-bitbake'>
<title>BitBake</title>
<para>
BitBake is the tool at the heart of the OpenEmbedded build system
and is responsible for parsing the
<ulink url='&YOCTO_DOCS_DEV_URL;#metadata'>Metadata</ulink>,
generating a list of tasks from it, and then executing those tasks.
</para>
<para>
This section briefly introduces BitBake.
If you want more information on BitBake, see the
<ulink url='&YOCTO_DOCS_BB_URL;#bitbake-user-manual'>BitBake User Manual</ulink>.
</para>
<para>
To see a list of the options BitBake supports, use either of
the following commands:
<literallayout class='monospaced'>
$ bitbake -h
$ bitbake --help
</literallayout>
</para>
<para>
The most common usage for BitBake is <filename>bitbake <replaceable>packagename</replaceable></filename>, where
<filename>packagename</filename> is the name of the package you want to build
(referred to as the "target" in this manual).
The target often equates to the first part of a recipe's filename
(e.g. "foo" for a recipe named
<filename>foo_1.3.0-r0.bb</filename>).
So, to process the <filename>matchbox-desktop_1.2.3.bb</filename> recipe file, you
might type the following:
<literallayout class='monospaced'>
$ bitbake matchbox-desktop
</literallayout>
Several different versions of <filename>matchbox-desktop</filename> might exist.
BitBake chooses the one selected by the distribution configuration.
You can get more details about how BitBake chooses between different
target versions and providers in the
"<ulink url='&YOCTO_DOCS_BB_URL;#bb-bitbake-preferences'>Preferences</ulink>"
section of the BitBake User Manual.
</para>
<para>
BitBake also tries to execute any dependent tasks first.
So for example, before building <filename>matchbox-desktop</filename>, BitBake
would build a cross compiler and <filename>glibc</filename> if they had not already
been built.
</para>
<para>
A useful BitBake option to consider is the <filename>-k</filename> or
<filename>--continue</filename> option.
This option instructs BitBake to try to continue processing the job
as long as possible, even after encountering an error.
When an error occurs, the target that
failed and those that depend on it cannot be remade.
However, when you use this option other dependencies can still be
processed.
</para>
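<para>
For example, the following command continues building as much of the
<filename>core-image-minimal</filename> image as possible even if some
tasks fail:
<literallayout class='monospaced'>
     $ bitbake -k core-image-minimal
</literallayout>
BitBake still reports the failed tasks at the end of the run.
</para>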
</section>
<section id='usingpoky-components-metadata'>
<title>Metadata (Recipes)</title>
<para>
Files that have the <filename>.bb</filename> suffix are "recipes"
files.
In general, a recipe contains information about a single piece of
software.
This information includes the location from which to download the
unaltered source, any source patches to be applied to that source
(if needed), which special configuration options to apply,
how to compile the source files, and how to package the compiled
output.
</para>
<para>
The term "package" is sometimes used to refer to recipes. However,
since the word "package" is used for the packaged output from the OpenEmbedded
build system (i.e. <filename>.ipk</filename> or <filename>.deb</filename> files),
this document avoids using the term "package" when referring to recipes.
</para>
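<para>
The following is a minimal, hypothetical recipe that shows where this
information lives (the project name, download location, and checksums
are placeholders rather than a real recipe):
<literallayout class='monospaced'>
     SUMMARY = "Example utility"
     LICENSE = "MIT"
     LIC_FILES_CHKSUM = "file://COPYING;md5=<replaceable>md5sum</replaceable>"

     SRC_URI = "http://example.com/releases/foo-${PV}.tar.gz \
                file://fix-build.patch"
     SRC_URI[md5sum] = "<replaceable>md5sum</replaceable>"

     S = "${WORKDIR}/foo-${PV}"

     # The autotools class supplies the configure, compile, and install logic.
     inherit autotools
</literallayout>
</para>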
</section>
<section id='usingpoky-components-classes'>
<title>Classes</title>
<para>
Class files (<filename>.bbclass</filename>) contain information that
is useful to share between
<ulink url='&YOCTO_DOCS_DEV_URL;#metadata'>Metadata</ulink> files.
An example is the
<link linkend='ref-classes-autotools'><filename>autotools</filename></link>
class, which contains common settings for any application that
Autotools uses.
The "<link linkend='ref-classes'>Classes</link>" chapter provides
details about classes and how to use them.
</para>
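<para>
For example, a recipe for an Autotools-based project pulls in this
shared logic with a single statement:
<literallayout class='monospaced'>
     inherit autotools
</literallayout>
The class then provides default
<filename>do_configure</filename>, <filename>do_compile</filename>, and
<filename>do_install</filename> implementations that individual recipes
can override or extend as needed.
</para>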
</section>
<section id='usingpoky-components-configuration'>
<title>Configuration</title>
<para>
The configuration files (<filename>.conf</filename>) define various configuration variables
that govern the OpenEmbedded build process.
These files fall into several areas that define machine configuration options,
distribution configuration options, compiler tuning options, general common configuration
options, and user configuration options in <filename>local.conf</filename>, which is found
in the
<ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>.
</para>
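<para>
For example, a user's <filename>local.conf</filename> typically selects
the target machine and the distribution policy with simple assignments
such as the following (the values shown are illustrative):
<literallayout class='monospaced'>
     MACHINE ?= "qemux86-64"
     DISTRO ?= "poky"
</literallayout>
</para>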
</section>
</section>
<section id="cross-development-toolchain-generation">
<title>Cross-Development Toolchain Generation</title>
<para>
The Yocto Project does most of the work for you when it comes to
creating
<ulink url='&YOCTO_DOCS_DEV_URL;#cross-development-toolchain'>cross-development toolchains</ulink>.
This section provides some technical background on how
cross-development toolchains are created and used.
For more information on toolchains, you can also see the
<ulink url='&YOCTO_DOCS_SDK_URL;'>Yocto Project Software Development Kit (SDK) Developer's Guide</ulink>.
</para>
<para>
In the Yocto Project development environment, cross-development
toolchains are used to build the image and applications that run on the
target hardware.
With just a few commands, the OpenEmbedded build system creates
these necessary toolchains for you.
</para>
<para>
The following figure shows a high-level build environment regarding
toolchain construction and use.
</para>
<para>
<imagedata fileref="figures/cross-development-toolchains.png" width="8in" depth="6in" align="center" />
</para>
<para>
Most of the work occurs on the Build Host.
This is the machine used to build images and to generally work within
the Yocto Project environment.
When you run BitBake to create an image, the OpenEmbedded build system
uses the host <filename>gcc</filename> compiler to bootstrap a
cross-compiler named <filename>gcc-cross</filename>.
The <filename>gcc-cross</filename> compiler is what BitBake uses to
compile source files when creating the target image.
You can think of <filename>gcc-cross</filename> simply as an
automatically generated cross-compiler that is used internally within
BitBake only.
<note>
The extensible SDK does not use
<filename>gcc-cross-canadian</filename> since this SDK
ships a copy of the OpenEmbedded build system and the sysroot
within it contains <filename>gcc-cross</filename>.
</note>
</para>
<para>
The chain of events that occurs when <filename>gcc-cross</filename> is
bootstrapped is as follows:
<literallayout class='monospaced'>
gcc -> binutils-cross -> gcc-cross-initial -> linux-libc-headers -> glibc-initial -> glibc -> gcc-cross -> gcc-runtime
</literallayout>
<itemizedlist>
<listitem><para><filename>gcc</filename>:
The build host's GNU Compiler Collection (GCC).
</para></listitem>
<listitem><para><filename>binutils-cross</filename>:
The bare minimum binary utilities needed in order to run
the <filename>gcc-cross-initial</filename> phase of the
bootstrap operation.
</para></listitem>
<listitem><para><filename>gcc-cross-initial</filename>:
An early stage of the bootstrap process for creating
the cross-compiler.
This stage builds enough of the <filename>gcc-cross</filename>,
the C library, and other pieces needed to finish building the
final cross-compiler in later stages.
This tool is a "native" package (i.e. it is designed to run on
the build host).
</para></listitem>
<listitem><para><filename>linux-libc-headers</filename>:
Headers needed for the cross-compiler.
</para></listitem>
<listitem><para><filename>glibc-initial</filename>:
An initial version of the Embedded GLIBC needed to bootstrap
<filename>glibc</filename>.
</para></listitem>
<listitem><para><filename>gcc-cross</filename>:
The final stage of the bootstrap process for the
cross-compiler.
This stage results in the actual cross-compiler that
BitBake uses when it builds an image for a targeted
device.
<note>
If you are replacing this cross compiler toolchain
with a custom version, you must replace
<filename>gcc-cross</filename>.
</note>
This tool is also a "native" package (i.e. it is
designed to run on the build host).
</para></listitem>
<listitem><para><filename>gcc-runtime</filename>:
Runtime libraries resulting from the toolchain bootstrapping
process.
This tool produces a binary that consists of the
runtime libraries needed for the targeted device.
</para></listitem>
</itemizedlist>
</para>
<para>
You can use the OpenEmbedded build system to build an installer for
the relocatable SDK used to develop applications.
When you run the installer, it installs the toolchain, which contains
the development tools (e.g.
<filename>gcc-cross-canadian</filename>,
<filename>binutils-cross-canadian</filename>, and other
<filename>nativesdk-*</filename> tools native to the SDK, i.e. native to
<link linkend='var-SDK_ARCH'><filename>SDK_ARCH</filename></link>)
that you need to cross-compile and test your software.
The figure shows the commands you use to easily build out this
toolchain.
This cross-development toolchain is built to execute on the
<link linkend='var-SDKMACHINE'><filename>SDKMACHINE</filename></link>,
which might or might not be the same
machine as the Build Host.
<note>
If your target architecture is supported by the Yocto Project,
you can take advantage of pre-built images that ship with the
Yocto Project and already contain cross-development toolchain
installers.
</note>
</para>
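<para>
One way to produce such an installer is to run the
<filename>do_populate_sdk</filename> task against an image target.
For example:
<literallayout class='monospaced'>
     $ bitbake core-image-minimal -c populate_sdk
</literallayout>
The resulting installer script is written under the
<filename>tmp/deploy/sdk</filename> area of the
<ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>.
</para>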
<para>
Here is the bootstrap process for the relocatable toolchain:
<literallayout class='monospaced'>
gcc -> binutils-crosssdk -> gcc-crosssdk-initial -> linux-libc-headers ->
glibc-initial -> nativesdk-glibc -> gcc-crosssdk -> gcc-cross-canadian
</literallayout>
<itemizedlist>
<listitem><para><filename>gcc</filename>:
The build host's GNU Compiler Collection (GCC).
</para></listitem>
<listitem><para><filename>binutils-crosssdk</filename>:
The bare minimum binary utilities needed in order to run
the <filename>gcc-crosssdk-initial</filename> phase of the
bootstrap operation.
</para></listitem>
<listitem><para><filename>gcc-crosssdk-initial</filename>:
An early stage of the bootstrap process for creating
the cross-compiler.
This stage builds enough of the
<filename>gcc-crosssdk</filename> and supporting pieces so that
the final stage of the bootstrap process can produce the
finished cross-compiler.
This tool is a "native" binary that runs on the build host.
</para></listitem>
<listitem><para><filename>linux-libc-headers</filename>:
Headers needed for the cross-compiler.
</para></listitem>
<listitem><para><filename>glibc-initial</filename>:
An initial version of the Embedded GLIBC needed to bootstrap
<filename>nativesdk-glibc</filename>.
</para></listitem>
<listitem><para><filename>nativesdk-glibc</filename>:
The Embedded GLIBC needed to bootstrap the
<filename>gcc-crosssdk</filename>.
</para></listitem>
<listitem><para><filename>gcc-crosssdk</filename>:
The final stage of the bootstrap process for the
relocatable cross-compiler.
The <filename>gcc-crosssdk</filename> is a transitory compiler
and never leaves the build host.
Its purpose is to help in the bootstrap process to create the
eventual relocatable <filename>gcc-cross-canadian</filename>
compiler.
This tool is also a "native" package (i.e. it is
designed to run on the build host).
</para></listitem>
<listitem><para><filename>gcc-cross-canadian</filename>:
The final relocatable cross-compiler.
When run on the
<link linkend='var-SDKMACHINE'><filename>SDKMACHINE</filename></link>,
this tool
produces executable code that runs on the target device.
Only one cross-canadian compiler is produced per architecture
since they can be targeted at different processor optimizations
using configurations passed to the compiler through the
compile commands.
This circumvents the need for multiple compilers and thus
reduces the size of the toolchains.
</para></listitem>
</itemizedlist>
</para>
<note>
For information on advantages gained when building a
cross-development toolchain installer, see the
"<ulink url='&YOCTO_DOCS_SDK_URL;#sdk-building-an-sdk-installer'>Building an SDK Installer</ulink>"
section in the Yocto Project Software Development Kit (SDK) Developer's
Guide.
</note>
</section>
<section id="shared-state-cache">
<title>Shared State Cache</title>
<para>
By design, the OpenEmbedded build system builds everything from scratch unless
BitBake can determine that parts do not need to be rebuilt.
Fundamentally, building from scratch is attractive as it means all parts are
built fresh and there is no possibility of stale data causing problems.
When developers hit problems, they typically default back to building from scratch
so they know the state of things from the start.
</para>
<para>
Building an image from scratch has both advantages and disadvantages.
As mentioned in the previous paragraph, building from scratch ensures that
everything is current and starts from a known state.
However, building from scratch also takes much longer as it generally means
rebuilding things that do not necessarily need to be rebuilt.
</para>
<para>
The Yocto Project implements shared state code that supports incremental builds.
The implementation of the shared state code answers the following questions that
were fundamental roadblocks within the OpenEmbedded incremental build support system:
<itemizedlist>
<listitem><para>What pieces of the system have changed and what pieces have
not changed?</para></listitem>
<listitem><para>How are changed pieces of software removed and replaced?</para></listitem>
<listitem><para>How are pre-built components that do not need to be rebuilt from scratch
used when they are available?</para></listitem>
</itemizedlist>
</para>
<para>
For the first question, the build system detects changes in the "inputs" to a given task by
creating a checksum (or signature) of the task's inputs.
If the checksum changes, the system assumes the inputs have changed and the task needs to be
rerun.
For the second question, the shared state (sstate) code tracks which tasks add which output
to the build process.
This means the output from a given task can be removed, upgraded or otherwise manipulated.
The third question is partly addressed by the solution for the second question
assuming the build system can fetch the sstate objects from remote locations and
install them if they are deemed to be valid.
</para>
<note>
The OpenEmbedded build system does not maintain
<link linkend='var-PR'><filename>PR</filename></link> information
as part of the shared state packages.
Consequently, considerations exist that affect maintaining shared
state feeds.
For information on how the OpenEmbedded build system
works with packages and can
track incrementing <filename>PR</filename> information, see the
"<ulink url='&YOCTO_DOCS_DEV_URL;#incrementing-a-package-revision-number'>Incrementing a Package Revision Number</ulink>"
section.
</note>
<para>
The rest of this section goes into detail about the overall incremental build
architecture, the checksums (signatures), shared state, and some tips and tricks.
</para>
<section id='overall-architecture'>
<title>Overall Architecture</title>
<para>
When determining what parts of the system need to be built, BitBake
works on a per-task basis rather than a per-recipe basis.
You might wonder why using a per-task basis is preferred over a per-recipe basis.
To help explain, consider having the IPK packaging backend enabled and then switching to DEB.
In this case, the
<link linkend='ref-tasks-install'><filename>do_install</filename></link>
and
<link linkend='ref-tasks-package'><filename>do_package</filename></link>
task outputs are still valid.
However, with a per-recipe approach, the build would not include the
<filename>.deb</filename> files.
Consequently, you would have to invalidate the whole build and rerun it.
Rerunning everything is not the best solution.
Also, in this case, the core must be "taught" much about specific tasks.
This methodology does not scale well and does not allow users to easily add new tasks
in layers or as external recipes without touching the packaged-staging core.
</para>
</section>
<section id='checksums'>
<title>Checksums (Signatures)</title>
<para>
The shared state code uses a checksum, which is a unique signature of a task's
inputs, to determine if a task needs to be run again.
Because it is a change in a task's inputs that triggers a rerun, the process
needs to detect all the inputs to a given task.
For shell tasks, this turns out to be fairly easy because
the build process generates a "run" shell script for each task and
it is possible to create a checksum that gives you a good idea of when
the task's data changes.
</para>
<para>
To complicate the problem, there are things that should not be
included in the checksum.
First, there is the actual specific build path of a given task -
the <link linkend='var-WORKDIR'><filename>WORKDIR</filename></link>.
It does not matter if the work directory changes because it should
not affect the output for target packages.
Also, the build process has the objective of making native
or cross packages relocatable.
<note>
Both native and cross packages run on the build host.
However, cross packages generate output for the target
architecture.
</note>
The checksum therefore needs to exclude
<filename>WORKDIR</filename>.
The simplistic approach for excluding the work directory is to set
<filename>WORKDIR</filename> to some fixed value and create the
checksum for the "run" script.
</para>
<para>
Another problem results from the "run" scripts containing functions that
might or might not get called.
The incremental build solution contains code that figures out dependencies
between shell functions.
This code is used to prune the "run" scripts down to the minimum set,
thereby alleviating this problem and making the "run" scripts much more
readable as a bonus.
</para>
<para>
So far we have solutions for shell scripts.
What about Python tasks?
The same approach applies even though these tasks are more difficult.
The process needs to figure out what variables a Python function accesses
and what functions it calls.
Again, the incremental build solution contains code that first figures out
the variable and function dependencies, and then creates a checksum for the data
used as the input to the task.
</para>
<para>
Like the <filename>WORKDIR</filename> case, situations exist where dependencies
should be ignored.
For these cases, you can instruct the build process to ignore a dependency
by using a line like the following:
<literallayout class='monospaced'>
PACKAGE_ARCHS[vardepsexclude] = "MACHINE"
</literallayout>
This example ensures that the
<link linkend='var-PACKAGE_ARCHS'><filename>PACKAGE_ARCHS</filename></link>
variable does not
depend on the value of
<link linkend='var-MACHINE'><filename>MACHINE</filename></link>,
even if it does reference it.
</para>
<para>
Equally, there are cases where we need to add dependencies BitBake is not able to find.
You can accomplish this by using a line like the following:
<literallayout class='monospaced'>
PACKAGE_ARCHS[vardeps] = "MACHINE"
</literallayout>
This example explicitly adds the <filename>MACHINE</filename> variable as a
dependency for <filename>PACKAGE_ARCHS</filename>.
</para>
<para>
Consider a case with in-line Python, for example, where BitBake is not
able to figure out dependencies.
When running in debug mode (i.e. using <filename>-DDD</filename>), BitBake
produces output when it discovers something for which it cannot figure out
dependencies.
The Yocto Project team has currently not managed to cover those dependencies
in detail and is aware of the need to fix this situation.
</para>
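<para>
For example, you can enable this additional debug output for a single
target as follows:
<literallayout class='monospaced'>
     $ bitbake -DDD matchbox-desktop
</literallayout>
</para>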
<para>
Thus far, this section has limited discussion to the direct inputs into a task.
Information based on direct inputs is referred to as the "basehash" in the
code.
However, there is still the question of a task's indirect inputs - the
things that were already built and present in the
<ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>.
The checksum (or signature) for a particular task needs to add the hashes
of all the tasks on which the particular task depends.
Choosing which dependencies to add is a policy decision.
However, the effect is to generate a master checksum that combines the basehash
and the hashes of the task's dependencies.
</para>
<para>
At the code level, there are a variety of ways both the basehash and the
dependent task hashes can be influenced.
Within the BitBake configuration file, we can give BitBake some extra information
to help it construct the basehash.
The following statement effectively results in a list of global variable
dependency excludes - variables never included in any checksum:
<literallayout class='monospaced'>
BB_HASHBASE_WHITELIST ?= "TMPDIR FILE PATH PWD BB_TASKHASH BBPATH DL_DIR \
SSTATE_DIR THISDIR FILESEXTRAPATHS FILE_DIRNAME HOME LOGNAME SHELL TERM \
USER FILESPATH STAGING_DIR_HOST STAGING_DIR_TARGET COREBASE PRSERV_HOST \
PRSERV_DUMPDIR PRSERV_DUMPFILE PRSERV_LOCKDOWN PARALLEL_MAKE \
CCACHE_DIR EXTERNAL_TOOLCHAIN CCACHE CCACHE_DISABLE LICENSE_PATH SDKPKGSUFFIX"
</literallayout>
The previous example excludes
<link linkend='var-WORKDIR'><filename>WORKDIR</filename></link>
since that variable is actually constructed as a path within
<link linkend='var-TMPDIR'><filename>TMPDIR</filename></link>, which is on
the whitelist.
</para>
<para>
The rules for deciding which hashes of dependent tasks to include through
dependency chains are more complex and are generally accomplished with a
Python function.
The code in <filename>meta/lib/oe/sstatesig.py</filename> shows two examples
of this and also illustrates how you can insert your own policy into the system
if so desired.
This file defines the two basic signature generators <filename>OE-Core</filename>
uses: "OEBasic" and "OEBasicHash".
By default, there is a dummy "noop" signature handler enabled in BitBake.
This means that behavior is unchanged from previous versions.
<filename>OE-Core</filename> uses the "OEBasicHash" signature handler by default
through this setting in the <filename>bitbake.conf</filename> file:
<literallayout class='monospaced'>
BB_SIGNATURE_HANDLER ?= "OEBasicHash"
</literallayout>
The "OEBasicHash" <filename>BB_SIGNATURE_HANDLER</filename> is the same as the
"OEBasic" version but adds the task hash to the stamp files.
As a result, any
<ulink url='&YOCTO_DOCS_DEV_URL;#metadata'>Metadata</ulink>
change that changes the task hash automatically
causes the task to be run again.
This removes the need to bump <link linkend='var-PR'><filename>PR</filename></link>
values, and changes to Metadata automatically ripple across the build.
</para>
<para>
It is also worth noting that the end result of these signature generators is to
make some dependency and hash information available to the build.
This information includes:
<itemizedlist>
<listitem><para><filename>BB_BASEHASH_task-</filename><replaceable>taskname</replaceable>:
The base hashes for each task in the recipe.
</para></listitem>
<listitem><para><filename>BB_BASEHASH_</filename><replaceable>filename</replaceable><filename>:</filename><replaceable>taskname</replaceable>:
The base hashes for each dependent task.
</para></listitem>
<listitem><para><filename>BBHASHDEPS_</filename><replaceable>filename</replaceable><filename>:</filename><replaceable>taskname</replaceable>:
The task dependencies for each task.
</para></listitem>
<listitem><para><filename>BB_TASKHASH</filename>:
The hash of the currently running task.
</para></listitem>
</itemizedlist>
</para>
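<para>
For example, a hypothetical task added to a recipe could read the hash
of the currently executing task from the datastore:
<literallayout class='monospaced'>
     python do_show_taskhash () {
         # BB_TASKHASH is set by the signature generator for the running task.
         bb.plain("Task hash: %s" % d.getVar("BB_TASKHASH", True))
     }
     addtask show_taskhash after do_configure
</literallayout>
</para>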
</section>
<section id='shared-state'>
<title>Shared State</title>
<para>
Checksums and dependencies, as discussed in the previous section, solve half the
problem of supporting a shared state.
The other part of the problem is being able to use checksum information during the build
and being able to reuse or rebuild specific components.
</para>
<para>
The
<link linkend='ref-classes-sstate'><filename>sstate</filename></link>
class is a relatively generic implementation of how to "capture"
a snapshot of a given task.
The idea is that the build process does not care about the source of a task's output.
Output could be freshly built or it could be downloaded and unpacked from
somewhere - the build process does not need to worry about its origin.
</para>
<para>
There are two types of output.
One type simply involves creating a directory
in <link linkend='var-WORKDIR'><filename>WORKDIR</filename></link>.
A good example is the output of either
<link linkend='ref-tasks-install'><filename>do_install</filename></link>
or
<link linkend='ref-tasks-package'><filename>do_package</filename></link>.
The other type of output occurs when a set of data is merged into a shared directory
tree such as the sysroot.
</para>
<para>
The Yocto Project team has tried to keep the details of the
implementation hidden in <filename>sstate</filename> class.
From a user's perspective, adding shared state wrapping to a task
is as simple as this
<link linkend='ref-tasks-deploy'><filename>do_deploy</filename></link>
example taken from the
<link linkend='ref-classes-deploy'><filename>deploy</filename></link>
class:
<literallayout class='monospaced'>
DEPLOYDIR = "${WORKDIR}/deploy-${PN}"
SSTATETASKS += "do_deploy"
do_deploy[sstate-inputdirs] = "${DEPLOYDIR}"
do_deploy[sstate-outputdirs] = "${DEPLOY_DIR_IMAGE}"
python do_deploy_setscene () {
sstate_setscene(d)
}
addtask do_deploy_setscene
do_deploy[dirs] = "${DEPLOYDIR} ${B}"
</literallayout>
The following list explains the previous example:
<itemizedlist>
<listitem><para>
Adding "do_deploy" to <filename>SSTATETASKS</filename>
adds some required sstate-related processing, which is
implemented in the
<link linkend='ref-classes-sstate'><filename>sstate</filename></link>
class, before and after the
<link linkend='ref-tasks-deploy'><filename>do_deploy</filename></link>
task.
</para></listitem>
<listitem><para>
The
<filename>do_deploy[sstate-inputdirs] = "${DEPLOYDIR}"</filename>
line declares that <filename>do_deploy</filename> places its
output in <filename>${DEPLOYDIR}</filename> when run
normally (i.e. when not using the sstate cache).
This output becomes the input to the shared state cache.
</para></listitem>
<listitem><para>
The
<filename>do_deploy[sstate-outputdirs] = "${DEPLOY_DIR_IMAGE}"</filename>
line causes the contents of the shared state cache to be
copied to <filename>${DEPLOY_DIR_IMAGE}</filename>.
<note>
If <filename>do_deploy</filename> is not already in
the shared state cache or if its input checksum
(signature) has changed from when the output was
cached, the task will be run to populate the shared
state cache, after which the contents of the shared
state cache is copied to
<filename>${DEPLOY_DIR_IMAGE}</filename>.
If <filename>do_deploy</filename> is in the shared
state cache and its signature indicates that the
cached output is still valid (i.e. if no
relevant task inputs have changed), then the contents
of the shared state cache will be copied directly to
<filename>${DEPLOY_DIR_IMAGE}</filename> by the
<filename>do_deploy_setscene</filename> task instead,
skipping the <filename>do_deploy</filename> task.
</note>
</para></listitem>
<listitem><para>
The following task definition is glue logic needed to make
the previous settings effective:
<literallayout class='monospaced'>
python do_deploy_setscene () {
sstate_setscene(d)
}
addtask do_deploy_setscene
</literallayout>
<filename>sstate_setscene()</filename> takes the flags
above as input and accelerates the
<filename>do_deploy</filename> task through the
shared state cache if possible.
If the task was accelerated,
<filename>sstate_setscene()</filename> returns True.
Otherwise, it returns False, and the normal
<filename>do_deploy</filename> task runs.
For more information, see the
"<ulink url='&YOCTO_DOCS_BB_URL;#setscene'>setscene</ulink>"
section in the BitBake User Manual.
</para></listitem>
<listitem><para>
The <filename>do_deploy[dirs] = "${DEPLOYDIR} ${B}"</filename>
line creates <filename>${DEPLOYDIR}</filename> and
<filename>${B}</filename> before the
<filename>do_deploy</filename> task runs, and also sets
the current working directory of
<filename>do_deploy</filename> to
<filename>${B}</filename>.
For more information, see the
"<ulink url='&YOCTO_DOCS_BB_URL;#variable-flags'>Variable Flags</ulink>"
section in the BitBake User Manual.
<note>
In cases where
<filename>sstate-inputdirs</filename> and
<filename>sstate-outputdirs</filename> would be the
same, you can use
<filename>sstate-plaindirs</filename>.
For example, to preserve the
<filename>${PKGD}</filename> and
<filename>${PKGDEST}</filename> output from the
<link linkend='ref-tasks-package'><filename>do_package</filename></link>
task, use the following:
<literallayout class='monospaced'>
do_package[sstate-plaindirs] = "${PKGD} ${PKGDEST}"
</literallayout>
</note>
</para></listitem>
<listitem><para>
<filename>sstate-inputdirs</filename> and
<filename>sstate-outputdirs</filename> can also be used
with multiple directories.
For example, the following declares
<filename>PKGDESTWORK</filename> and
<filename>SHLIBWORK</filename> as shared state
input directories, which populates the shared state
cache, and <filename>PKGDATA_DIR</filename> and
<filename>SHLIBSDIR</filename> as the corresponding
shared state output directories:
<literallayout class='monospaced'>
do_package[sstate-inputdirs] = "${PKGDESTWORK} ${SHLIBSWORKDIR}"
do_package[sstate-outputdirs] = "${PKGDATA_DIR} ${SHLIBSDIR}"
</literallayout>
</para></listitem>
<listitem><para>
These methods also include the ability to take a lockfile
when manipulating shared state directory structures,
for cases where file additions or removals are sensitive:
<literallayout class='monospaced'>
do_package[sstate-lockfile] = "${PACKAGELOCK}"
</literallayout>
</para></listitem>
</itemizedlist>
</para>
<para>
Behind the scenes, the shared state code works by looking in
<link linkend='var-SSTATE_DIR'><filename>SSTATE_DIR</filename></link> and
<link linkend='var-SSTATE_MIRRORS'><filename>SSTATE_MIRRORS</filename></link>
for shared state files.
Here is an example:
<literallayout class='monospaced'>
SSTATE_MIRRORS ?= "\
file://.* http://someserver.tld/share/sstate/PATH;downloadfilename=PATH \n \
file://.* file:///some/local/dir/sstate/PATH"
</literallayout>
<note>
The shared state directory (<filename>SSTATE_DIR</filename>) is
organized into two-character subdirectories, where the subdirectory
names are based on the first two characters of the hash.
If the shared state directory structure for a mirror has the
same structure as <filename>SSTATE_DIR</filename>, you must
specify "PATH" as part of the URI to enable the build system
to map to the appropriate subdirectory.
</note>
</para>
<para>
The shared state package validity can be detected just by looking at the
filename since the filename contains the task checksum (or signature) as
described earlier in this section.
If a valid shared state package is found, the build process downloads it
and uses it to accelerate the task.
</para>
<para>
The build process uses the <filename>*_setscene</filename> tasks
for the task acceleration phase.
BitBake goes through this phase before the main execution code and tries
to accelerate any tasks for which it can find shared state packages.
If a shared state package for a task is available, the shared state
package is used.
This means the task and any tasks on which it is dependent are not
executed.
</para>
<para>
As a real-world example, consider building an IPK-based image.
Ideally, only the
<link linkend='ref-tasks-package_write_ipk'><filename>do_package_write_ipk</filename></link>
tasks would have their
shared state packages fetched and extracted.
Since the sysroot is not used, it would never get extracted.
This is another reason why a task-based approach is preferred over a
recipe-based approach, which would have to install the output from every task.
</para>
</section>
<section id='tips-and-tricks'>
<title>Tips and Tricks</title>
<para>
The code in the build system that supports incremental builds is not
simple code.
This section presents some tips and tricks that help you work around
issues related to shared state code.
</para>
<section id='debugging'>
<title>Debugging</title>
<para>
Seeing what metadata went into creating the input signature
of a shared state (sstate) task can be a useful debugging aid.
This information is available in signature information
(<filename>siginfo</filename>) files in
<link linkend='var-SSTATE_DIR'><filename>SSTATE_DIR</filename></link>.
For information on how to view and interpret information in
<filename>siginfo</filename> files, see the
"<link linkend='usingpoky-viewing-task-variable-dependencies'>Viewing Task Variable Dependencies</link>"
section.
</para>
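<para>
For example, the <filename>bitbake-diffsigs</filename> utility compares
two signature data files and reports which task inputs differ between
them (the file names here are placeholders):
<literallayout class='monospaced'>
     $ bitbake-diffsigs <replaceable>sigdata_file1</replaceable> <replaceable>sigdata_file2</replaceable>
</literallayout>
</para>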
</section>
<section id='invalidating-shared-state'>
<title>Invalidating Shared State</title>
<para>
The OpenEmbedded build system uses checksums and shared state
cache to avoid unnecessarily rebuilding tasks.
Collectively, this scheme is known as "shared state code."
</para>
<para>
As with all schemes, this one has some drawbacks.
It is possible that you could make implicit changes to your
code that the checksum calculations do not take into
account.
These implicit changes affect a task's output but do not trigger
the shared state code into rebuilding a recipe.
Consider an example during which a tool changes its output.
Assume that the output of <filename>rpmdeps</filename> changes.
The result of the change should be that all the
<filename>package</filename> and
<filename>package_write_rpm</filename> shared state cache
items become invalid.
However, because the change to the output is
external to the code and therefore implicit,
the associated shared state cache items do not become
invalidated.
In this case, the build process uses the cached items rather
than running the task again.
Obviously, these types of implicit changes can cause problems.
</para>
<para>
To avoid these problems during the build, you need to
understand the effects of any changes you make.
Realize that changes you make directly to a function
are automatically factored into the checksum calculation.
Thus, these explicit changes invalidate the associated area of
shared state cache.
However, you need to be aware of any implicit changes that
are not obvious changes to the code and could affect the output
of a given task.
</para>
<para>
When you identify an implicit change, you can easily take steps
to invalidate the cache and force the tasks to run.
The steps you can take are as simple as changing a function's
comments in the source code.
For example, to invalidate package shared state files, change
the comment statements of
<link linkend='ref-tasks-package'><filename>do_package</filename></link>
or the comments of one of the functions it calls.
Even though the change is purely cosmetic, it causes the
checksum to be recalculated and forces the OpenEmbedded build
system to run the task again.
</para>
<note>
For an example of a commit that makes a cosmetic change to
invalidate shared state, see this
<ulink url='&YOCTO_GIT_URL;/cgit.cgi/poky/commit/meta/classes/package.bbclass?id=737f8bbb4f27b4837047cb9b4fbfe01dfde36d54'>commit</ulink>.
</note>
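<para>
During development, you can also force a particular task to run again
from the command line, which taints the task's signature so that its
previously cached output is not reused.
For example:
<literallayout class='monospaced'>
     $ bitbake matchbox-desktop -c package -f
</literallayout>
</para>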
</section>
</section>
</section>
<section id='automatically-added-runtime-dependencies'>
<title>Automatically Added Runtime Dependencies</title>
<para>
The OpenEmbedded build system automatically adds common types of
runtime dependencies between packages, which means that you do not
need to explicitly declare the packages using
<link linkend='var-RDEPENDS'><filename>RDEPENDS</filename></link>.
Three automatic mechanisms exist (<filename>shlibdeps</filename>,
<filename>pcdeps</filename>, and <filename>depchains</filename>) that
handle shared libraries, package configuration (pkg-config) modules,
and <filename>-dev</filename> and <filename>-dbg</filename> packages,
respectively.
For other types of runtime dependencies, you must manually declare
the dependencies.
<itemizedlist>
<listitem><para>
<filename>shlibdeps</filename>:
During the
<link linkend='ref-tasks-package'><filename>do_package</filename></link>
task of each recipe, all shared libraries installed by the
recipe are located.
For each shared library, the package that contains the shared
library is registered as providing the shared library.
More specifically, the package is registered as providing the
<ulink url='https://en.wikipedia.org/wiki/Soname'>soname</ulink>
of the library.
The resulting shared-library-to-package mapping
is saved globally in
<link linkend='var-PKGDATA_DIR'><filename>PKGDATA_DIR</filename></link>
by the
<link linkend='ref-tasks-packagedata'><filename>do_packagedata</filename></link>
task.</para>
<para>Simultaneously, all executables and shared libraries
installed by the recipe are inspected to see what shared
libraries they link against.
For each shared library dependency that is found,
<filename>PKGDATA_DIR</filename> is queried to
see if some package (likely from a different recipe) contains
the shared library.
If such a package is found, a runtime dependency is added from
the package that depends on the shared library to the package
that contains the library.</para>
<para>The automatically added runtime dependency also includes
a version restriction.
This version restriction specifies that at least the current
version of the package that provides the shared library must be
used, as if
"<replaceable>package</replaceable> (>= <replaceable>version</replaceable>)"
had been added to
<link linkend='var-RDEPENDS'><filename>RDEPENDS</filename></link>.
This forces an upgrade of the package containing the shared
library when installing the package that depends on the
library, if needed.</para>
<para>If you want to avoid a package being registered as
providing a particular shared library (e.g. because the library
is for internal use only), then add the library to
<link linkend='var-PRIVATE_LIBS'><filename>PRIVATE_LIBS</filename></link>
inside the package's recipe.
</para></listitem>
<listitem><para>
<filename>pcdeps</filename>:
During the
<link linkend='ref-tasks-package'><filename>do_package</filename></link>
task of each recipe, all pkg-config modules
(<filename>*.pc</filename> files) installed by the recipe are
located.
For each module, the package that contains the module is
registered as providing the module.
The resulting module-to-package mapping is saved globally in
<link linkend='var-PKGDATA_DIR'><filename>PKGDATA_DIR</filename></link>
by the
<link linkend='ref-tasks-packagedata'><filename>do_packagedata</filename></link>
task.</para>
<para>Simultaneously, all pkg-config modules installed by the
recipe are inspected to see what other pkg-config modules they
depend on.
A module is seen as depending on another module if it contains
a "Requires:" line that specifies the other module.
For each module dependency,
<filename>PKGDATA_DIR</filename> is queried to see if some
package contains the module.
If such a package is found, a runtime dependency is added from
the package that depends on the module to the package that
contains the module.
<note>
The <filename>pcdeps</filename> mechanism most often infers
dependencies between <filename>-dev</filename> packages.
</note>
</para></listitem>
<listitem><para>
<filename>depchains</filename>:
If a package <filename>foo</filename> depends on a package
<filename>bar</filename>, then <filename>foo-dev</filename>
and <filename>foo-dbg</filename> are also made to depend on
<filename>bar-dev</filename> and <filename>bar-dbg</filename>,
respectively.
Taking the <filename>-dev</filename> packages as an example,
the <filename>bar-dev</filename> package might provide
headers and shared library symlinks needed by
<filename>foo-dev</filename>, which shows the need
for a dependency between the packages.</para>
<para>The dependencies added by <filename>depchains</filename>
are in the form of
<link linkend='var-RRECOMMENDS'><filename>RRECOMMENDS</filename></link>.
<note>
By default, <filename>foo-dev</filename> also has an
<filename>RDEPENDS</filename>-style dependency on
<filename>foo</filename>, because the default value of
<filename>RDEPENDS_${PN}-dev</filename> (set in
<filename>bitbake.conf</filename>) includes
"${PN}".
</note></para>
<para>To ensure that the dependency chain is never broken,
<filename>-dev</filename> and <filename>-dbg</filename>
packages are always generated by default, even if the packages
turn out to be empty.
See the
<link linkend='var-ALLOW_EMPTY'><filename>ALLOW_EMPTY</filename></link>
variable for more information.
</para></listitem>
</itemizedlist>
</para>
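<para>
Runtime dependencies that these mechanisms cannot detect must still be
declared explicitly in the recipe.
A typical, illustrative declaration looks like the following:
<literallayout class='monospaced'>
     RDEPENDS_${PN} += "bash"
</literallayout>
</para>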
<para>
The <filename>do_package</filename> task depends on the
<link linkend='ref-tasks-packagedata'><filename>do_packagedata</filename></link>
task of each recipe in
<link linkend='var-DEPENDS'><filename>DEPENDS</filename></link>
through use of a
<filename>[</filename><ulink url='&YOCTO_DOCS_BB_URL;#variable-flags'><filename>deptask</filename></ulink><filename>]</filename>
declaration, which guarantees that the required
shared-library/module-to-package mapping information will be available
when needed as long as <filename>DEPENDS</filename> has been
correctly set.
</para>
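<para>
Such a declaration takes the following form in the packaging class
metadata:
<literallayout class='monospaced'>
     do_package[deptask] = "do_packagedata"
</literallayout>
This tells BitBake that, before <filename>do_package</filename> can run
for a recipe, the <filename>do_packagedata</filename> task must have
completed for every recipe in that recipe's
<link linkend='var-DEPENDS'><filename>DEPENDS</filename></link>.
</para>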
</section>
<section id='fakeroot-and-pseudo'>
<title>Fakeroot and Pseudo</title>
<para>
Some tasks are easier to implement when allowed to perform certain
operations that are normally reserved for the root user.
For example, the
<link linkend='ref-tasks-install'><filename>do_install</filename></link>
task benefits from being able to set the UID and GID of installed files
to arbitrary values.
</para>
<para>
One approach to allowing tasks to perform root-only operations
would be to require BitBake to run as root.
However, this method is cumbersome and has security issues.
The approach that is actually used is to run tasks that benefit from
root privileges in a "fake" root environment.
Within this environment, the task and its child processes believe that
they are running as the root user, and see an internally consistent
view of the filesystem.
As long as generating the final output (e.g. a package or an image)
does not require root privileges, the fact that some earlier steps ran
in a fake root environment does not cause problems.
</para>
<para>
The capability to run tasks in a fake root environment is known as
"fakeroot", which is derived from the BitBake keyword/variable
flag that requests a fake root environment for a task.
In current versions of the OpenEmbedded build system,
the program that implements fakeroot is known as Pseudo.
</para>
<para>
Pseudo overrides system calls through the
<filename>LD_PRELOAD</filename> mechanism to give the
illusion of running as root.
To keep track of "fake" file ownership and permissions resulting from
operations that require root permissions, an sqlite3
database is used.
This database is stored in
<filename>${</filename><link linkend='var-WORKDIR'><filename>WORKDIR</filename></link><filename>}/pseudo/files.db</filename>
for individual recipes.
Storing the database in a file as opposed to in memory
gives persistence between tasks, and even between builds.
<note><title>Caution</title>
If you add your own task that manipulates the same files or
directories as a fakeroot task, then that task should also run
under fakeroot.
Otherwise, the task will not be able to run root-only operations,
and will not see the fake file ownership and permissions set by the
other task.
You should also add a dependency on
<filename>virtual/fakeroot-native:do_populate_sysroot</filename>,
giving the following:
<literallayout class='monospaced'>
fakeroot do_mytask () {
...
}
do_mytask[depends] += "virtual/fakeroot-native:do_populate_sysroot"
</literallayout>
</note>
For more information, see the
<ulink url='&YOCTO_DOCS_BB_URL;#var-FAKEROOT'><filename>FAKEROOT*</filename></ulink>
variables in the BitBake User Manual.
You can also reference this
<ulink url='http://www.ibm.com/developerworks/opensource/library/os-aapseudo1/index.html'>Pseudo</ulink>
article.
</para>
</section>
<section id='x32'>
<title>x32</title>
<para>
x32 is a processor-specific Application Binary Interface (psABI) for x86_64.
An ABI defines the calling conventions between functions in a processing environment.
The interface determines what registers are used and what the sizes are for various C data types.
</para>
<para>
Some processing environments prefer using 32-bit applications even when running
on Intel 64-bit platforms.
Consider the i386 psABI, which is a very old 32-bit ABI for Intel 64-bit platforms.
The i386 psABI does not provide efficient use and access of the Intel 64-bit processor resources,
leaving the system underutilized.
Now consider the x86_64 psABI.
This ABI is newer and uses 64-bits for data sizes and program pointers.
The extra bits increase the footprint size of the programs and libraries,
and also increase the memory and file system size requirements.
Executing under the x32 psABI enables user programs to utilize CPU and system resources
more efficiently while keeping the memory footprint of the applications low.
Extra bits are used for registers but not for addressing mechanisms.
</para>
<section id='support'>
<title>Support</title>
<para>
This Yocto Project release supports the final specifications of x32
psABI.
Support for x32 psABI exists as follows:
<itemizedlist>
<listitem><para>You can create packages and images in x32 psABI format on x86_64 architecture targets.
</para></listitem>
<listitem><para>You can successfully build many recipes with the x32 toolchain.</para></listitem>
<listitem><para>You can create and boot <filename>core-image-minimal</filename> and
<filename>core-image-sato</filename> images.</para></listitem>
</itemizedlist>
</para>
</section>
<section id='completing-x32'>
<title>Completing x32</title>
<para>
Future plans for the x32 psABI in the Yocto Project include the following:
<itemizedlist>
<listitem><para>Enhance and fix the few remaining recipes so they
work with and support x32 toolchains.</para></listitem>
<listitem><para>Enhance RPM Package Manager (RPM) support for x32 binaries.</para></listitem>
<listitem><para>Support larger images.</para></listitem>
</itemizedlist>
</para>
</section>
<section id='using-x32-right-now'>
<title>Using x32 Right Now</title>
<para>
Follow these steps to use the x32 psABI:
<itemizedlist>
<listitem><para>Enable the x32 psABI tuning file for <filename>x86_64</filename>
machines by editing the <filename>conf/local.conf</filename> like this:
<literallayout class='monospaced'>
MACHINE = "qemux86-64"
DEFAULTTUNE = "x86-64-x32"
baselib = "${@d.getVar('BASE_LIB_tune-' + (d.getVar('DEFAULTTUNE', True) \
or 'INVALID'), True) or 'lib'}"
#MACHINE = "genericx86"
#DEFAULTTUNE = "core2-64-x32"
</literallayout></para></listitem>
<listitem><para>As usual, use BitBake to build an image that supports the x32 psABI.
Here is an example:
<literallayout class='monospaced'>
$ bitbake core-image-sato
</literallayout></para></listitem>
<listitem><para>As usual, run your image using QEMU:
<literallayout class='monospaced'>
$ runqemu qemux86-64 core-image-sato
</literallayout></para></listitem>
</itemizedlist>
</para>
</section>
</section>
<section id="wayland">
<title>Wayland</title>
<para>
<ulink url='http://en.wikipedia.org/wiki/Wayland_(display_server_protocol)'>Wayland</ulink>
is a computer display server protocol that
provides a method for compositing window managers to communicate
directly with applications and video hardware and expects them to
communicate with input hardware using other libraries.
Using Wayland with supporting targets can result in better control
over graphics frame rendering than an application might otherwise
achieve.
</para>
<para>
The Yocto Project provides the Wayland protocol libraries and the
reference
<ulink url='http://en.wikipedia.org/wiki/Wayland_(display_server_protocol)#Weston'>Weston</ulink>
compositor as part of its release.
This section describes what you need to do to implement Wayland and
use the compositor when building an image for a supporting target.
</para>
<section id="wayland-support">
<title>Support</title>
<para>
The Wayland protocol libraries and the reference Weston compositor
ship as integrated packages in the <filename>meta</filename> layer
of the
<ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
Specifically, you can find the recipes that build both Wayland
and Weston at <filename>meta/recipes-graphics/wayland</filename>.
</para>
<para>
You can build both the Wayland and Weston packages for use only
with targets that accept the
<ulink url='http://dri.freedesktop.org/wiki/'>Mesa 3D and Direct Rendering Infrastructure</ulink>,
which is also known as Mesa DRI.
This implies that you cannot build and use the packages if your
target uses, for example, the
<trademark class='registered'>Intel</trademark> Embedded Media and
Graphics Driver (<trademark class='registered'>Intel</trademark>
EMGD) that overrides Mesa DRI.
</para>
<note>
Due to lack of EGL support, Weston 1.0.3 will not run directly on
the emulated QEMU hardware.
However, this version of Weston will run under X emulation without
issues.
</note>
</section>
<section id="enabling-wayland-in-an-image">
<title>Enabling Wayland in an Image</title>
<para>
To enable Wayland, you need both to build it and to include it
in the image.
</para>
<section id="enable-building">
<title>Building</title>
<para>
To cause Mesa to build the <filename>wayland-egl</filename>
platform and Weston to build Wayland with Kernel Mode
Setting
(<ulink url='https://wiki.archlinux.org/index.php/Kernel_Mode_Setting'>KMS</ulink>)
support, include the "wayland" flag in the
<link linkend="var-DISTRO_FEATURES"><filename>DISTRO_FEATURES</filename></link>
statement in your <filename>local.conf</filename> file:
<literallayout class='monospaced'>
DISTRO_FEATURES_append = " wayland"
</literallayout>
</para>
<note>
If X11 has been enabled elsewhere, Weston will build Wayland
with X11 support.
</note>
</section>
<section id="enable-installation-in-an-image">
<title>Installing</title>
<para>
To install the Wayland feature into an image, you must
include the following
<link linkend='var-CORE_IMAGE_EXTRA_INSTALL'><filename>CORE_IMAGE_EXTRA_INSTALL</filename></link>
statement in your <filename>local.conf</filename> file:
<literallayout class='monospaced'>
CORE_IMAGE_EXTRA_INSTALL += "wayland weston"
</literallayout>
</para>
</section>
</section>
<section id="running-weston">
<title>Running Weston</title>
<para>
To run Weston inside X11, enabling it as described earlier and
building a Sato image is sufficient.
If you are running your image under Sato, a Weston Launcher appears
in the "Utility" category.
</para>
<para>
Alternatively, you can run Weston through the command-line
interpreter (CLI), which is better suited for development work.
To run Weston under the CLI, you need to do the following after
your image is built:
<orderedlist>
<listitem><para>Run these commands to export
<filename>XDG_RUNTIME_DIR</filename>:
<literallayout class='monospaced'>
mkdir -p /tmp/$USER-weston
chmod 0700 /tmp/$USER-weston
export XDG_RUNTIME_DIR=/tmp/$USER-weston
</literallayout></para></listitem>
<listitem><para>Launch Weston in the shell:
<literallayout class='monospaced'>
weston
</literallayout></para></listitem>
</orderedlist>
</para>
</section>
</section>
<section id="licenses">
<title>Licenses</title>
<para>
This section describes the mechanism by which the OpenEmbedded build system
tracks changes to licensing text.
The section also describes how to enable commercially licensed recipes,
which by default are disabled.
</para>
<para>
For information that can help you maintain compliance with various open
source licensing during the lifecycle of the product, see the
"<ulink url='&YOCTO_DOCS_DEV_URL;#maintaining-open-source-license-compliance-during-your-products-lifecycle'>Maintaining Open Source License Compliance During Your Project's Lifecycle</ulink>" section
in the Yocto Project Development Manual.
</para>
<section id="usingpoky-configuring-LIC_FILES_CHKSUM">
<title>Tracking License Changes</title>
<para>
The license of an upstream project might change in the future.
In order to prevent these changes from going unnoticed, the
<filename><link linkend='var-LIC_FILES_CHKSUM'>LIC_FILES_CHKSUM</link></filename>
variable tracks changes to the license text. The checksums are validated at the end of the
configure step, and if the checksums do not match, the build will fail.
</para>
<section id="usingpoky-specifying-LIC_FILES_CHKSUM">
<title>Specifying the <filename>LIC_FILES_CHKSUM</filename> Variable</title>
<para>
The <filename>LIC_FILES_CHKSUM</filename>
variable contains checksums of the license text in the source code for the recipe.
Following is an example of how to specify <filename>LIC_FILES_CHKSUM</filename>:
<literallayout class='monospaced'>
LIC_FILES_CHKSUM = "file://COPYING;md5=xxxx \
file://licfile1.txt;beginline=5;endline=29;md5=yyyy \
file://licfile2.txt;endline=50;md5=zzzz \
..."
</literallayout>
</para>
<para>
The build system uses the
<filename><link linkend='var-S'>S</link></filename> variable as
the default directory when searching files listed in
<filename>LIC_FILES_CHKSUM</filename>.
The previous example employs the default directory.
</para>
<para>
Consider this next example:
<literallayout class='monospaced'>
LIC_FILES_CHKSUM = "file://src/ls.c;beginline=5;endline=16;\
md5=bb14ed3c4cda583abc85401304b5cd4e"
LIC_FILES_CHKSUM = "file://${WORKDIR}/license.html;md5=5c94767cedb5d6987c902ac850ded2c6"
</literallayout>
</para>
<para>
The first line locates a file in
<filename>${S}/src/ls.c</filename>.
The second line refers to a file in
<filename><link linkend='var-WORKDIR'>WORKDIR</link></filename>.
</para>
<para>
Note that the <filename>LIC_FILES_CHKSUM</filename> variable is
mandatory for all recipes, unless the
<filename>LICENSE</filename> variable is set to "CLOSED".
</para>
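<para>
For example, a minimal sketch of a recipe for proprietary code
whose license text is not tracked might contain the following
(a hypothetical snippet, not a complete recipe):
<literallayout class='monospaced'>
     # With LICENSE set to "CLOSED", no LIC_FILES_CHKSUM is required.
     LICENSE = "CLOSED"
</literallayout>
</para>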
</section>
<section id="usingpoky-LIC_FILES_CHKSUM-explanation-of-syntax">
<title>Explanation of Syntax</title>
<para>
As mentioned in the previous section, the
<filename>LIC_FILES_CHKSUM</filename> variable lists all the
important files that contain the license text for the source code.
It is possible to specify a checksum for an entire file, or a specific section of a
file (specified by beginning and ending line numbers with the "beginline" and "endline"
parameters, respectively).
The latter is useful for source files with a license notice header,
README documents, and so forth.
If you do not use the "beginline" parameter, then it is assumed that the text begins on the
first line of the file.
Similarly, if you do not use the "endline" parameter, it is assumed that the license text
ends with the last line of the file.
</para>
<para>
The "md5" parameter stores the md5 checksum of the license text.
If the license text changes in any way, the checksum computed during
the build no longer matches this parameter and a mismatch occurs.
This mismatch triggers a build failure and notifies the developer.
Notification allows the developer to review and address the license text changes.
Also note that if a mismatch occurs during the build, the correct md5
checksum is placed in the build log and can be easily copied to the recipe.
</para>
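<para>
If you prefer to compute a checksum yourself rather than copy it
from the build log, you can run the standard
<filename>md5sum</filename> utility against the license file in
the unpacked source, for example:
<literallayout class='monospaced'>
     $ md5sum COPYING
</literallayout>
The resulting checksum is the value to place in the "md5"
parameter for that file.
</para>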
<para>
There is no limit to how many files you can specify using the
<filename>LIC_FILES_CHKSUM</filename> variable.
Generally, however, every project requires a few specifications for license tracking.
Many projects have a "COPYING" file that stores the license information for all the source
code files.
This practice allows you to just track the "COPYING" file as long as it is kept up to date.
</para>
<tip>
If you specify an empty or invalid "md5" parameter, BitBake returns an md5 mismatch
error and displays the correct "md5" parameter value during the build.
The correct parameter is also captured in the build log.
</tip>
<tip>
If the whole file contains only license text, you do not need to use the "beginline" and
"endline" parameters.
</tip>
</section>
</section>
<section id="enabling-commercially-licensed-recipes">
<title>Enabling Commercially Licensed Recipes</title>
<para>
By default, the OpenEmbedded build system disables
components that have commercial or other special licensing
requirements.
Such requirements are defined on a
recipe-by-recipe basis through the
<link linkend='var-LICENSE_FLAGS'><filename>LICENSE_FLAGS</filename></link>
variable definition in the affected recipe.
For instance, the
<filename>poky/meta/recipes-multimedia/gstreamer/gst-plugins-ugly</filename>
recipe contains the following statement:
<literallayout class='monospaced'>
LICENSE_FLAGS = "commercial"
</literallayout>
Here is a slightly more complicated example that contains both an
explicit recipe name and version (after variable expansion):
<literallayout class='monospaced'>
LICENSE_FLAGS = "license_${PN}_${PV}"
</literallayout>
In order for a component restricted by a <filename>LICENSE_FLAGS</filename>
definition to be enabled and included in an image, it
needs to have a matching entry in the global
<link linkend='var-LICENSE_FLAGS_WHITELIST'><filename>LICENSE_FLAGS_WHITELIST</filename></link>
variable, which is
typically defined in your <filename>local.conf</filename> file.
For example, to enable
the <filename>poky/meta/recipes-multimedia/gstreamer/gst-plugins-ugly</filename>
package, you could add either the string
"commercial_gst-plugins-ugly" or the more general string
"commercial" to <filename>LICENSE_FLAGS_WHITELIST</filename>.
See the
"<link linkend='license-flag-matching'>License Flag Matching</link>" section
for a full explanation of how <filename>LICENSE_FLAGS</filename> matching works.
Here is the example:
<literallayout class='monospaced'>
LICENSE_FLAGS_WHITELIST = "commercial_gst-plugins-ugly"
</literallayout>
Likewise, to additionally enable the package built from the recipe containing
<filename>LICENSE_FLAGS = "license_${PN}_${PV}"</filename>, and assuming
that the actual recipe name was <filename>emgd_1.10.bb</filename>,
the following string would enable that package as well as
the original <filename>gst-plugins-ugly</filename> package:
<literallayout class='monospaced'>
LICENSE_FLAGS_WHITELIST = "commercial_gst-plugins-ugly license_emgd_1.10"
</literallayout>
As a convenience, you do not need to specify the complete license string
in the whitelist for every package.
You can use an abbreviated form, which consists
of just the first portion or portions of the license string before
the initial underscore character or characters.
A partial string matches
any license flag that begins with the given string.
For example, the following
whitelist string will also match both of the packages
previously mentioned as well as any other packages that have
licenses starting with "commercial" or "license".
<literallayout class='monospaced'>
LICENSE_FLAGS_WHITELIST = "commercial license"
</literallayout>
</para>
<section id="license-flag-matching">
<title>License Flag Matching</title>
<para>
License flag matching allows you to control what recipes the
OpenEmbedded build system includes in the build.
Fundamentally, the build system attempts to match
<link linkend='var-LICENSE_FLAGS'><filename>LICENSE_FLAGS</filename></link>
strings found in recipes against
<link linkend='var-LICENSE_FLAGS_WHITELIST'><filename>LICENSE_FLAGS_WHITELIST</filename></link>
strings found in the whitelist.
A match causes the build system to include a recipe in the
build, while failure to find a match causes the build system to
exclude a recipe.
</para>
<para>
In general, license flag matching is simple.
However, understanding some concepts will help you
correctly and effectively use matching.
</para>
<para>
Before a flag
defined by a particular recipe is tested against the
contents of the whitelist, the expanded string
<filename>_${PN}</filename> is appended to the flag.
This expansion makes each <filename>LICENSE_FLAGS</filename>
value recipe-specific.
After expansion, the string is then matched against the
whitelist.
Thus, specifying
<filename>LICENSE_FLAGS = "commercial"</filename>
in recipe "foo", for example, results in the string
<filename>"commercial_foo"</filename>.
And, to create a match, that string must appear in the
whitelist.
</para>
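<para>
The following sketch illustrates the expansion using a
hypothetical recipe named "foo":
<literallayout class='monospaced'>
     # In the hypothetical foo recipe (e.g. foo_1.0.bb):
     LICENSE_FLAGS = "commercial"

     # After expansion, the flag becomes "commercial_foo".
     # Either of the following local.conf entries matches it:
     LICENSE_FLAGS_WHITELIST = "commercial_foo"
     LICENSE_FLAGS_WHITELIST = "commercial"
</literallayout>
</para>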
<para>
Judicious use of the <filename>LICENSE_FLAGS</filename>
strings and the contents of the
<filename>LICENSE_FLAGS_WHITELIST</filename> variable
allows you a lot of flexibility for including or excluding
recipes based on licensing.
For example, you can broaden the matching capabilities by
using license flags string subsets in the whitelist.
<note>When using a string subset, be sure to use the part of
the expanded string that precedes the appended underscore
character (e.g. use "usethispart" to match
"usethispart_1.3", "usethispart_1.4", and so forth).
</note>
For example, simply specifying the string "commercial" in
the whitelist matches any expanded
<filename>LICENSE_FLAGS</filename> definition that starts with
the string "commercial" such as "commercial_foo" and
"commercial_bar", which are the strings the build system
automatically generates for hypothetical recipes named
"foo" and "bar" assuming those recipes simply specify the
following:
<literallayout class='monospaced'>
LICENSE_FLAGS = "commercial"
</literallayout>
Thus, you can choose to exhaustively
enumerate each license flag in the whitelist and
allow only specific recipes into the image, or
you can use a string subset that causes a broader range of
matches to allow a range of recipes into the image.
</para>
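<para>
As a brief sketch of these two approaches in
<filename>local.conf</filename>, again using the hypothetical
"foo" and "bar" recipes:
<literallayout class='monospaced'>
     # Exhaustive: allow only these two specific recipes.
     LICENSE_FLAGS_WHITELIST = "commercial_foo commercial_bar"

     # Broad: allow any recipe whose expanded flag starts with "commercial".
     LICENSE_FLAGS_WHITELIST = "commercial"
</literallayout>
</para>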
<para>
This scheme works even if the
<filename>LICENSE_FLAGS</filename> string already
has <filename>_${PN}</filename> appended.
For example, the build system turns the license flag
"commercial_1.2_foo" into "commercial_1.2_foo_foo" and would
match both the general "commercial" and the specific
"commercial_1.2_foo" strings found in the whitelist, as
expected.
</para>
<para>
Here are some other scenarios, which are illustrated together
in the example following the list:
<itemizedlist>
<listitem><para>You can specify a versioned string in the
recipe such as "commercial_foo_1.2" in a "foo" recipe.
The build system expands this string to
"commercial_foo_1.2_foo".
Combine this license flag with a whitelist that has
the string "commercial" and you match the flag along
with any other flag that starts with the string
"commercial".</para></listitem>
<listitem><para>Under the same circumstances, you can
use "commercial_foo" in the whitelist and the
build system not only matches "commercial_foo_1.2" but
also matches any license flag with the string
"commercial_foo", regardless of the version.
</para></listitem>
<listitem><para>You can be very specific and use both the
package and version parts in the whitelist (e.g.
"commercial_foo_1.2") to specifically match a
versioned recipe.</para></listitem>
</itemizedlist>
</para>
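<para>
The following sketch pulls the scenarios together for the
hypothetical "foo" recipe at version 1.2:
<literallayout class='monospaced'>
     # In the hypothetical foo recipe:
     LICENSE_FLAGS = "commercial_foo_1.2"
     # The build system expands this flag to "commercial_foo_1.2_foo".

     # Any one of the following local.conf entries matches it,
     # from broadest to most specific:
     LICENSE_FLAGS_WHITELIST = "commercial"
     LICENSE_FLAGS_WHITELIST = "commercial_foo"
     LICENSE_FLAGS_WHITELIST = "commercial_foo_1.2"
</literallayout>
</para>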
</section>
<section id="other-variables-related-to-commercial-licenses">
<title>Other Variables Related to Commercial Licenses</title>
<para>
Other helpful variables related to commercial
license handling exist and are defined in the
<filename>poky/meta/conf/distro/include/default-distrovars.inc</filename> file:
<literallayout class='monospaced'>
COMMERCIAL_AUDIO_PLUGINS ?= ""
COMMERCIAL_VIDEO_PLUGINS ?= ""
</literallayout>
If you want to enable these components, you can do so by making sure you have
statements similar to the following
in your <filename>local.conf</filename> configuration file:
<literallayout class='monospaced'>
COMMERCIAL_AUDIO_PLUGINS = "gst-plugins-ugly-mad \
gst-plugins-ugly-mpegaudioparse"
COMMERCIAL_VIDEO_PLUGINS = "gst-plugins-ugly-mpeg2dec \
gst-plugins-ugly-mpegstream gst-plugins-bad-mpegvideoparse"
LICENSE_FLAGS_WHITELIST = "commercial_gst-plugins-ugly commercial_gst-plugins-bad commercial_qmmp"
</literallayout>
Of course, you could also create a matching whitelist
for those components using the more general "commercial"
in the whitelist, but that would also enable all the
other packages with
<link linkend='var-LICENSE_FLAGS'><filename>LICENSE_FLAGS</filename></link>
containing "commercial", which you may or may not want:
<literallayout class='monospaced'>
LICENSE_FLAGS_WHITELIST = "commercial"
</literallayout>
</para>
<para>
Specifying audio and video plug-ins as part of the
<filename>COMMERCIAL_AUDIO_PLUGINS</filename> and
<filename>COMMERCIAL_VIDEO_PLUGINS</filename> statements
(along with the enabling
<filename>LICENSE_FLAGS_WHITELIST</filename>) includes the
plug-ins or components in the built images, thus adding
support for media formats or components.
</para>
</section>
</section>
</section>
</chapter>
<!--
vim: expandtab tw=80 ts=4
-->